<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[georgetheka.com]]></title><description><![CDATA[georgetheka.com]]></description><link>https://georgetheka.com/</link><image><url>https://georgetheka.com/favicon.png</url><title>georgetheka.com</title><link>https://georgetheka.com/</link></image><generator>Ghost 4.1</generator><lastBuildDate>Fri, 03 Oct 2025 20:49:32 GMT</lastBuildDate><atom:link href="https://georgetheka.com/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Cutting Corners]]></title><description><![CDATA[<p>On Monday morning, Jimmy pulls up in front of Starbucks at 8:16 am on his way to work. He decides to forgo paying for parking. It only costs $0.25 / 15 minutes but the hassle of navigating the parking app on his phone isn&#x2019;t worth it. &#xA0;</p>]]></description><link>https://georgetheka.com/cutting-corners/</link><guid isPermaLink="false">64c833018d191a584d6e5a1d</guid><dc:creator><![CDATA[George Theka]]></dc:creator><pubDate>Mon, 31 Jul 2023 04:00:00 GMT</pubDate><media:content url="https://georgetheka.com/content/images/2023/08/linkedin-post.png" medium="image"/><content:encoded><![CDATA[<img src="https://georgetheka.com/content/images/2023/08/linkedin-post.png" alt="Cutting Corners"><p>On Monday morning, Jimmy pulls up in front of Starbucks at 8:16 am on his way to work. He decides to forgo paying for parking. It only costs $0.25 / 15 minutes but the hassle of navigating the parking app on his phone isn&#x2019;t worth it. &#xA0;&quot;I&apos;ll only be a few minutes,&quot; he reassures himself. After ordering his favorite drink and paying $6.38, Jimmy patiently waits for his name to be called. &#x201C;A venti caramel latte for Timmy,&#x201D; someone yells. 
Eight minutes later, he heads back to his car and drives off. Over the following days, Jimmy repeats this routine. On Thursday, however, as he leaves the store, his heart sinks when he notices an orange envelope placed on his windshield&#x2014;a $60 parking violation ticket.</p><p>It&#x2019;s not uncommon for companies that build new products to cut corners in areas where they should aim to raise the bar. Instead, they put their energy into building more feature-rich products, a strategy that increases execution risk and hurts the company&#x2019;s short- and long-term chances of success.</p><p>I would argue there are three reasons why.</p><ol><li>Leadership may lack experience in building products in a lean, agile way. Building a strong culture is an absolutely essential ingredient, but it may not be enough. It is considerably more complex to build a team that can establish strong engineering, operational, and data-driven strategic disciplines designed for scale. It requires leaders with the enthusiasm and ability to articulate their vision, but also with the hard skills to steer the technical strategy in the right direction.</li><li>The product in question is simply not mission critical for the company, in both potential value and investment risk. If the product fails to deliver, the company does not fail with it. The company knows what it needs to do to maximize the chance of success, but it intentionally chooses to divert its investments into its top priorities, making a calculated bet.</li><li>While (1) or (2) alone describe a small number of orgs, a mix of (1) and (2) (lack of experience combined with prioritization challenges) describes a 2-D plane where most companies that build products fight their daily battles. Here, it isn&#x2019;t always clear whether one decision is better than another, and progress can be difficult. 
Companies are also constantly moving up/down and left/right on this plane as they battle competition, an ever-changing workforce, and a fast-evolving tech ecosystem.</li></ol><p>There are ways to steer the ship toward a more successful destination, starting by recognizing that some easy-looking strategies are not always the most successful ones. Examples:</p><ul><li>Funding a team to &quot;go away somewhere&quot; and come back with a product ready for launch in 10 months is rarely the right strategy for most products. Instead, a team should be tasked with bringing a much smaller version of that product to a limited number of customers much sooner, so their feedback can help inform and adjust the strategy. This is just the Agile philosophy at work.</li><li>Deprioritizing necessary infrastructure and processes (proper end-to-end deployment automation and rollback, collaborative tooling, automated security) increases daily toil, dampens collaboration, and kills innovation. 
These things may not be very exciting to build, but they are critical to maintaining velocity as the product scales and the team grows.</li><li>Not setting a high enough bar for the quality of the system design, the written code, and the testing tools and release processes, and not establishing the means for tracking key success/failure metrics &#x2013; these directly impact product quality and set a ceiling on hiring and retaining tech talent that becomes very difficult to raise later without a significant shift in strategy and investment.</li><li>Underinvesting in the customer support experience by not creating an operationally mature, cross-functional culture where everyone in the company feels responsible for customer outcomes, a culture where everyone understands the importance of reacting to customer issues swiftly and thoughtfully, and of proactively working to improve them.</li><li>Underinvesting in building data &#x201C;eyes&#x201D; into an active product from the very beginning, as part of the implementation strategy. Leading a product that lacks data analytics and operational and technical metrics is a lot like flying an airplane through clouds with the navigational instruments turned off.</li><li>Not consistently reviewing and updating the execution strategy so it can be course-corrected for what is learned through each iteration and from customer feedback.</li></ul>]]></content:encoded></item><item><title><![CDATA[Foundational Arpeggios]]></title><description><![CDATA[<p>Recently, I began practicing guitar more regularly after a nearly 14-year hiatus and I wanted to share some of those routines. 
In this post I will cover some fundamental arpeggios, which are an excellent mechanism for building up fluidity across the fretboard.</p><p>While typical arpeggio patterns you may</p>]]></description><link>https://georgetheka.com/basic-arpeggios/</link><guid isPermaLink="false">6057f2c3f357351b3208d8bc</guid><category><![CDATA[guitar]]></category><category><![CDATA[music]]></category><category><![CDATA[practice]]></category><dc:creator><![CDATA[George Theka]]></dc:creator><pubDate>Wed, 22 Feb 2023 00:45:00 GMT</pubDate><media:content url="https://georgetheka.com/content/images/2023/02/Untitled.png" medium="image"/><content:encoded><![CDATA[<img src="https://georgetheka.com/content/images/2023/02/Untitled.png" alt="Foundational Arpeggios"><p>Recently, I began practicing guitar more regularly after a nearly 14-year hiatus and I wanted to share some of those routines. In this post I will cover some fundamental arpeggios, which are an excellent mechanism for building up fluidity across the fretboard.</p><p>While typical arpeggio patterns you may have seen focus on common triadic shapes, I want to focus on 7th-chord, 2-note-per-string patterns that are repetitive and can easily expand diagonally on the fretboard.</p><p>I typically categorize them by chord family, flavor, and length.</p><ul><li>Chord Families: Major, Dominant, Minor, Diminished</li><li>Chord Flavors: flattened fifth (b5), sharpened or augmented fifth (#5)</li><li>Length: ~2.5 octaves and ~3+ octaves (requiring a jump somewhere)</li></ul><p>Generally, I focus on two types of patterns for each chord/flavor/length: arpeggios beginning on the low E string, and arpeggios beginning on the A string. These end up hitting different paths on the fretboard, even though they share common shapes.</p><p>In either case, I follow a personal rule for the 3-octave arpeggios - they should aim to expand on the D string whenever possible (i.e. 
it requires playing 2 notes followed by a jump on the same string and then playing 2 more notes higher). The reason is simple: the D string is preceded and succeeded by strings tuned in 4ths, and therefore there are no odd pitch shifts right after the expansion, as would be the case with the G string followed by the B string, a major 3rd up. I also try to avoid these jumps on the B and E strings, as they carry notes in the melody register and expansions there are more challenging to execute seamlessly. Lastly, I avoid the low E and A strings because of their low registers and intonation issues with these types of transitions. Overall, using the D string tends to make lines more fluid and opens up possibilities in improvisation.</p><p>I pay particularly close attention to the altered-fifth versions of each chord, as well as the diminished family. These are not exotic flavors; they play a fundamental role in functional harmony. For example, a Major 7 b5 arpeggio outlines the Dom 7 chord a whole step above it, a Minor 7 b5 arpeggio outlines the Dom 7 chord a major third below it, and so on.</p><figure class="kg-card kg-embed-card kg-card-hascaption"><iframe width="200" height="113" src="https://www.youtube.com/embed/H9rjPB6a0ek?start=200&amp;feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen title="Expanding Foundational Arpeggios"></iframe><figcaption>E-String Patterns</figcaption></figure><p>Below are the patterns shown in the video.</p><h1 id="e-string-patterns">E-String Patterns<br></h1><h2 id="g-major-7">G Major 7</h2><figure class="kg-card kg-code-card"><pre><code>    0    1   2   3   4   5   6   7   8   9   10  11  12   13  14  15  16
e&apos; ___||___|___|___|___|___|___|_1_|___|___|_4_|___|___||___|___|___|___/
b  ___||___|___|___|___|___|___|_1_|_2_|___|___|___|___||___|___|___|___\
g  ___||___|___|___|_1_|___|___|_4_|___|___|___|___|___||___|___|___|___/
d  ___||___|___|___|_1_|_2_|___|___|___|___|___|___|___||___|___|___|___\
a  ___||___|_1_|___|___|_4_|___|___|___|___|___|___|___||___|___|___|___/
e  ___||___|_1_|_2_|___|___|___|___|___|___|___|___|___||___|___|___|___\
                 o       o       o       o           o            o</code></pre><figcaption>2.5 Octaves</figcaption></figure><figure class="kg-card kg-code-card"><pre><code>    0    1   2   3   4   5   6   7   8   9   10  11  12   13  14  15  16
e&apos; ___||___|___|___|___|___|___|___|___|___|___|___|___||___|_1_|_2_|___/
b  ___||___|___|___|___|___|___|___|___|___|___|___|_1_||___|___|_3_|___\
g  ___||___|___|___|___|___|___|___|___|___|___|_1_|_2_||___|___|___|___/
d  ___||___|___|___|_1_|_2_|___|___|___|_1_|___|___|_4_||___|___|___|___\
a  ___||___|_1_|___|___|_4_|___|___|___|___|___|___|___||___|___|___|___/
e  ___||___|_1_|_2_|___|___|___|___|___|___|___|___|___||___|___|___|___\
                 o       o       o       o           o            o</code></pre><figcaption>3 Octaves</figcaption></figure><h2 id="g-major-7-b5">G Major 7 b5</h2><figure class="kg-card kg-code-card"><pre><code>    0    1   2   3   4   5   6   7   8   9   10  11  12   13  14  15  16
e&apos; ___||___|___|___|___|___|___|_1_|___|_3_|___|___|___||___|___|___|___/
b  ___||___|___|___|___|___|___|_1_|_2_|___|___|___|___||___|___|___|___\
g  ___||___|___|___|_1_|___|_3_|___|___|___|___|___|___||___|___|___|___/
d  ___||___|___|___|_1_|_2_|___|___|___|___|___|___|___||___|___|___|___\
a  ___||___|_1_|___|_3_|___|___|___|___|___|___|___|___||___|___|___|___/
e  ___||___|_1_|_2_|___|___|___|___|___|___|___|___|___||___|___|___|___\
                 o       o       o       o           o            o</code></pre><figcaption>2.5 Octaves</figcaption></figure><figure class="kg-card kg-code-card"><pre><code>    0    1   2   3   4   5   6   7   8   9   10  11  12   13  14  15  16
e&apos; ___||___|___|___|___|___|___|___|___|___|___|___|___||___|_1_|_2_|___/
b  ___||___|___|___|___|___|___|___|___|___|___|___|_1_||___|_3_|___|___\
g  ___||___|___|___|___|___|___|___|___|___|___|_1_|_2_||___|___|___|___/
d  ___||___|___|___|_1_|_2_|___|___|___|_1_|___|_3_|___||___|___|___|___\
a  ___||___|_1_|___|_3_|___|___|___|___|___|___|___|___||___|___|___|___/
e  ___||___|_1_|_2_|___|___|___|___|___|___|___|___|___||___|___|___|___\
                 o       o       o       o           o            o</code></pre><figcaption>3 Octaves</figcaption></figure><h2 id="g-major-7-5">G Major 7 #5</h2><figure class="kg-card kg-code-card"><pre><code>    0    1   2   3   4   5   6   7   8   9   10  11  12   13  14  15  16
e&apos; ___||___|___|___|___|___|___|_1_|___|___|___|_4_|___||___|___|___|___/
b  ___||___|___|___|___|___|___|_1_|_2_|___|___|___|___||___|___|___|___\
g  ___||___|___|___|_1_|___|___|___|_4_|___|___|___|___||___|___|___|___/
d  ___||___|___|___|_1_|_2_|___|___|___|___|___|___|___||___|___|___|___\
a  ___||___|_1_|___|___|___|_4_|___|___|___|___|___|___||___|___|___|___/
e  ___||___|_1_|_2_|___|___|___|___|___|___|___|___|___||___|___|___|___\
                 o       o       o       o           o            o</code></pre><figcaption>2.5 Octaves</figcaption></figure><figure class="kg-card kg-code-card"><pre><code>    0    1   2   3   4   5   6   7   8   9   10  11  12   13  14  15  16
e&apos; ___||___|___|___|___|___|___|___|___|___|___|___|___||___|_1_|_2_|___/
b  ___||___|___|___|___|___|___|___|___|___|___|___|_1_||___|___|___|_4_\
g  ___||___|___|___|___|___|___|___|___|___|___|_1_|_2_||___|___|___|___/
d  ___||___|___|___|_1_|_2_|___|___|___|_1_|___|___|___||_4_|___|___|___\
a  ___||___|_1_|___|___|___|_4_|___|___|___|___|___|___||___|___|___|___/
e  ___||___|_1_|_2_|___|___|___|___|___|___|___|___|___||___|___|___|___\
                 o       o       o       o           o            o</code></pre><figcaption>3 Octaves</figcaption></figure><h2 id="g-dominant-7">G Dominant 7</h2><figure class="kg-card kg-code-card"><pre><code>    0    1   2   3   4   5   6   7   8   9   10  11  12   13  14  15  16
e&apos; ___||___|___|___|___|___|___|_1_|___|___|_4_|___|___||___|___|___|___/
b  ___||___|___|___|___|___|_1_|___|_3_|___|___|___|___||___|___|___|___\
g  ___||___|___|___|_1_|___|___|_4_|___|___|___|___|___||___|___|___|___/
d  ___||___|___|_1_|___|_3_|___|___|___|___|___|___|___||___|___|___|___\
a  ___||___|_1_|___|___|_4_|___|___|___|___|___|___|___||___|___|___|___/
e  ___||_1_|___|_3_|___|___|___|___|___|___|___|___|___||___|___|___|___\
                 o       o       o       o           o            o</code></pre><figcaption>2.5 Octaves</figcaption></figure><figure class="kg-card kg-code-card"><pre><code>    0    1   2   3   4   5   6   7   8   9   10  11  12   13  14  15  16
e&apos; ___||___|___|___|___|___|___|___|___|___|___|___|___||_1_|___|_3_|___/
b  ___||___|___|___|___|___|___|___|___|___|___|___|_1_||___|___|_3_|___\
g  ___||___|___|___|___|___|___|___|___|___|_1_|___|_3_||___|___|___|___/
d  ___||___|___|_1_|___|_3_|___|___|___|_1_|___|___|_4_||___|___|___|___\
a  ___||___|_1_|___|___|_4_|___|___|___|___|___|___|___||___|___|___|___/
e  ___||_1_|___|_3_|___|___|___|___|___|___|___|___|___||___|___|___|___\
                 o       o       o       o           o            o</code></pre><figcaption>3 Octaves</figcaption></figure><h2 id="g-dominant-7-b5">G Dominant 7 b5</h2><figure class="kg-card kg-code-card"><pre><code>    0    1   2   3   4   5   6   7   8   9   10  11  12   13  14  15  16
e&apos; ___||___|___|___|___|___|___|_1_|___|_3_|___|___|___||___|___|___|___/
b  ___||___|___|___|___|___|_1_|___|_3_|___|___|___|___||___|___|___|___\
g  ___||___|___|___|_1_|___|_3_|___|___|___|___|___|___||___|___|___|___/
d  ___||___|___|_1_|___|_3_|___|___|___|___|___|___|___||___|___|___|___\
a  ___||___|_1_|___|_3_|___|___|___|___|___|___|___|___||___|___|___|___/
e  ___||_1_|___|_3_|___|___|___|___|___|___|___|___|___||___|___|___|___\
                 o       o       o       o           o            o</code></pre><figcaption>2.5 Octaves</figcaption></figure><figure class="kg-card kg-code-card"><pre><code>    0    1   2   3   4   5   6   7   8   9   10  11  12   13  14  15  16
e&apos; ___||___|___|___|___|___|___|___|___|___|___|___|___||_1_|___|_3_|___/
b  ___||___|___|___|___|___|___|___|___|___|___|___|_1_||___|_3_|___|___\
g  ___||___|___|___|___|___|___|___|___|___|_1_|___|_3_||___|___|___|___/
d  ___||___|___|_1_|___|_3_|___|___|___|_1_|___|_3_|___||___|___|___|___\
a  ___||___|_1_|___|_3_|___|___|___|___|___|___|___|___||___|___|___|___/
e  ___||_1_|___|_3_|___|___|___|___|___|___|___|___|___||___|___|___|___\
                 o       o       o       o           o            o</code></pre><figcaption>3 Octaves</figcaption></figure><h2 id="g-dominant-7-5">G Dominant 7 #5</h2><figure class="kg-card kg-code-card"><pre><code>    0    1   2   3   4   5   6   7   8   9   10  11  12   13  14  15  16
e&apos; ___||___|___|___|___|___|___|_1_|___|___|___|_4_|___||___|___|___|___/
b  ___||___|___|___|___|___|_1_|___|_3_|___|___|___|___||___|___|___|___\
g  ___||___|___|___|_1_|___|___|___|_4_|___|___|___|___||___|___|___|___/
d  ___||___|___|_1_|___|_3_|___|___|___|___|___|___|___||___|___|___|___\
a  ___||___|_1_|___|___|___|_4_|___|___|___|___|___|___||___|___|___|___/
e  ___||_1_|___|_3_|___|___|___|___|___|___|___|___|___||___|___|___|___\
                 o       o       o       o           o            o</code></pre><figcaption>2.5 Octaves</figcaption></figure><figure class="kg-card kg-code-card"><pre><code>    0    1   2   3   4   5   6   7   8   9   10  11  12   13  14  15  16
e&apos; ___||___|___|___|___|___|___|___|___|___|___|___|___||_1_|___|_3_|___/
b  ___||___|___|___|___|___|___|___|___|___|___|___|_1_||___|___|___|_4_\
g  ___||___|___|___|___|___|___|___|___|___|_1_|___|_3_||___|___|___|___/
d  ___||___|___|_1_|___|_3_|___|___|___|_1_|___|___|___||_4_|___|___|___\
a  ___||___|_1_|___|___|___|_4_|___|___|___|___|___|___||___|___|___|___/
e  ___||_1_|___|_3_|___|___|___|___|___|___|___|___|___||___|___|___|___\
                 o       o       o       o           o            o</code></pre><figcaption>3 Octaves</figcaption></figure><h2 id="g-minor-7">G Minor 7</h2><figure class="kg-card kg-code-card"><pre><code>    0    1   2   3   4   5   6   7   8   9   10  11  12   13  14  15  16
e&apos; ___||___|___|___|___|___|_1_|___|___|___|_4_|___|___||___|___|___|___/
b  ___||___|___|___|___|___|_1_|___|_3_|___|___|___|___||___|___|___|___\
g  ___||___|___|_1_|___|___|___|_4_|___|___|___|___|___||___|___|___|___/
d  ___||___|___|_1_|___|_3_|___|___|___|___|___|___|___||___|___|___|___\
a  ___||_1_|___|___|___|_4_|___|___|___|___|___|___|___||___|___|___|___/
e  ___||_1_|___|_3_|___|___|___|___|___|___|___|___|___||___|___|___|___\
                 o       o       o       o           o            o</code></pre><figcaption>2.5 Octaves</figcaption></figure><figure class="kg-card kg-code-card"><pre><code>    0    1   2   3   4   5   6   7   8   9   10  11  12   13  14  15  16
e&apos; ___||___|___|___|___|___|___|___|___|___|___|___|___||_1_|___|_3_|___/
b  ___||___|___|___|___|___|___|___|___|___|___|_1_|___||___|___|_4_|___\
g  ___||___|___|___|___|___|___|___|___|___|_1_|___|_3_||___|___|___|___/
d  ___||___|___|_1_|___|_3_|___|___|_1_|___|___|___|_4_||___|___|___|___\
a  ___||_1_|___|___|___|_4_|___|___|___|___|___|___|___||___|___|___|___/
e  ___||_1_|___|_3_|___|___|___|___|___|___|___|___|___||___|___|___|___\
                 o       o       o       o           o            o</code></pre><figcaption>3 Octaves</figcaption></figure><h2 id="g-minor-7-b5">G Minor 7 b5</h2><figure class="kg-card kg-code-card"><pre><code>    0    1   2   3   4   5   6   7   8   9   10  11  12   13  14  15  16
e&apos; ___||___|___|___|___|___|_1_|___|___|_4_|___|___|___||___|___|___|___/
b  ___||___|___|___|___|___|_1_|___|_3_|___|___|___|___||___|___|___|___\
g  ___||___|___|_1_|___|___|_4_|___|___|___|___|___|___||___|___|___|___/
d  ___||___|___|_1_|___|_3_|___|___|___|___|___|___|___||___|___|___|___\
a  ___||_1_|___|___|_4_|___|___|___|___|___|___|___|___||___|___|___|___/
e  ___||_1_|___|_3_|___|___|___|___|___|___|___|___|___||___|___|___|___\
                 o       o       o       o           o            o</code></pre><figcaption>2.5 Octaves</figcaption></figure><figure class="kg-card kg-code-card"><pre><code>    0    1   2   3   4   5   6   7   8   9   10  11  12   13  14  15  16
e&apos; ___||___|___|___|___|___|___|___|___|___|___|___|___||_1_|___|_3_|___/
b  ___||___|___|___|___|___|___|___|___|___|___|_1_|___||___|_4_|___|___\
g  ___||___|___|___|___|___|___|___|___|___|_1_|___|_3_||___|___|___|___/
d  ___||___|___|_1_|___|_3_|___|___|_1_|___|___|_4_|___||___|___|___|___\
a  ___||_1_|___|___|_4_|___|___|___|___|___|___|___|___||___|___|___|___/
e  ___||_1_|___|_3_|___|___|___|___|___|___|___|___|___||___|___|___|___\
                 o       o       o       o           o            o</code></pre><figcaption>3 Octaves</figcaption></figure><h2 id="g-diminished">G Diminished</h2><figure class="kg-card kg-code-card"><pre><code>    0    1   2   3   4   5   6   7   8   9   10  11  12   13  14  15  16
e&apos; ___||___|___|___|___|___|_1_|___|___|_4_|___|___|___||___|___|___|___/
b  ___||___|___|___|___|_1_|___|___|_4_|___|___|___|___||___|___|___|___\
g  ___||___|___|_1_|___|___|_4_|___|___|___|___|___|___||___|___|___|___/
d  ___||___|_1_|___|___|_4_|___|___|___|___|___|___|___||___|___|___|___\
a  ___||_1_|___|___|_4_|___|___|___|___|___|___|___|___||___|___|___|___/
e  _0_||___|___|_3_|___|___|___|___|___|___|___|___|___||___|___|___|___\
                 o       o       o       o           o            o</code></pre><figcaption>2.5 Octaves</figcaption></figure><figure class="kg-card kg-code-card"><pre><code>    0    1   2   3   4   5   6   7   8   9   10  11  12   13  14  15  16
e&apos; ___||___|___|___|___|___|___|___|___|___|___|___|_1_||___|___|_4_|___/
b  ___||___|___|___|___|___|___|___|___|___|___|_1_|___||___|_4_|___|___\
g  ___||___|___|___|___|___|___|___|___|_1_|___|___|_4_||___|___|___|___/
d  ___||___|_1_|___|___|_4_|___|___|_1_|___|___|_4_|___||___|___|___|___\
a  ___||_1_|___|___|_4_|___|___|___|___|___|___|___|___||___|___|___|___/
e  _0_||___|___|_3_|___|___|___|___|___|___|___|___|___||___|___|___|___\
                 o       o       o       o           o            o</code></pre><figcaption>3 Octaves</figcaption></figure><h1 id="a-string-patterns">A-String Patterns</h1><p>I couldn&apos;t cover these with another video due to time limitations.</p><h2 id="c-major-7">C Major 7</h2><figure class="kg-card kg-code-card"><pre><code>    0    1   2   3   4   5   6   7   8   9   10  11  12   13  14  15  16
e&apos; ___||___|___|___|___|___|___|_1_|_2_|___|___|___|___||___|___|___|___/
b  ___||___|___|___|___|_1_|___|___|_4_|___|___|___|___||___|___|___|___\
g  ___||___|___|___|_1_|_2_|___|___|___|___|___|___|___||___|___|___|___/
d  ___||___|_1_|___|___|_4_|___|___|___|___|___|___|___||___|___|___|___\
a  ___||___|_1_|_2_|___|___|___|___|___|___|___|___|___||___|___|___|___/
e  _0_||___|___|_1_|___|___|___|___|___|___|___|___|___||___|___|___|___\
                 o       o       o       o           o            o</code></pre><figcaption>2.5 Octaves</figcaption></figure><figure class="kg-card kg-code-card"><pre><code>    0    1   2   3   4   5   6   7   8   9   10  11  12   13  14  15  16
e&apos; ___||___|___|___|___|___|___|___|___|___|___|___|_1_||___|___|_3_|___/
b  ___||___|___|___|___|___|___|___|___|___|___|___|_1_||_2_|___|___|___\
g  ___||___|___|___|___|___|___|___|___|_1_|___|___|_4_||___|___|___|___/
d  ___||___|_1_|___|___|_4_|___|___|___|_1_|_2_|___|___||___|___|___|___\
a  ___||___|_1_|_2_|___|___|___|___|___|___|___|___|___||___|___|___|___/
e  _0_||___|___|_1_|___|___|___|___|___|___|___|___|___||___|___|___|___\
                 o       o       o       o           o            o</code></pre><figcaption>3 Octaves</figcaption></figure><p>I am going to leave it up to the reader to come up with patterns for the rest of these arpeggios. When doing so, keep in mind three things:</p><ul><li>Try expanding them along the D string, for the reasons mentioned above.</li><li>Try substituting the 4th finger with the 3rd finger for any landing note above the 12th fret, because frets are closer together in that region. When viable, 3 is usually easier to play than 4.</li><li>Try maintaining a 2-note-per-string pattern, even if it sometimes leads to odd shapes and uncomfortable fingerings. I would argue that some of the clustered triadic shapes which are trivial to play don&apos;t lend themselves well to expanding across the board without sacrificing line fluidity and range.</li></ul><p>Keep in mind that this works for me and should generally work for others, but it may not work for everyone. Ultimately you have to find a good balance between borrowing others&apos; ideas and developing ideas that work well for you.</p><p>I hope this was helpful in some way, and I welcome any feedback.</p>]]></content:encoded></item><item><title><![CDATA[Roblox!]]></title><description><![CDATA[My kids love playing Roblox on their iPads. But they can only play on the weekends and they have a 2hr time limit per day. ]]></description><link>https://georgetheka.com/roblox/</link><guid isPermaLink="false">63d2d3113072d31a1a70b6a9</guid><dc:creator><![CDATA[George Theka]]></dc:creator><pubDate>Thu, 26 Jan 2023 13:00:00 GMT</pubDate><media:content url="https://georgetheka.com/content/images/2023/01/roblox.png" medium="image"/><content:encoded><![CDATA[<img src="https://georgetheka.com/content/images/2023/01/roblox.png" alt="Roblox!"><p>My kids love playing Roblox on their iPads. But they can only play on the weekends and they have a 2hr time limit per day. 
That&#x2019;s a lot less time than they want, and my wife and I are in constant negotiations with them to extend their time limits. So, recently, I gave them a new challenge. You want to spend more time on Roblox and even do something cool? How about learning to design and code your own games? They looovvvved the idea!</p><p>We&#x2019;re in Week 2 and they are now using laptops, becoming comfortable with the Roblox Studio IDE (Roblox&#x2019;s game-building software), and chipping away at learning the Lua programming language for game interactivity.</p><p>I expected a steep learning curve, but I was wrong. They &#x201C;ditched&#x201D; the concepts and &#x201C;theory&#x201D; that I tried to explain carefully and instead dove straight into the fun, learning by trial and error. With just a little guidance, and a whole lot of relentless persistence and hyperfocus, their young brains seem to remember complex UI shortcuts and type entire code segments effortlessly.</p><p>It has made me think about a couple of things.</p><p>We sometimes oversimplify things for the new generation, forgetting that with the right support and some thoughtful guidance and instruction they can learn at incredible speeds. Instead, we lower the entry barrier because it makes it easier for us to roll out a teaching process. But the slowdown drags out the learning process, diminishes the rewards, and often destroys the most important element of learning - the motivating factor. Of course, there are times when it is worth slowing things down in order to build a strong foundation, but it doesn&#x2019;t have to happen right away and it has to be balanced with reward in a dynamic way. 
It feels like we simply don&#x2019;t have great mechanisms to do this at scale yet, but at a smaller group level (family, friends, work team), we have more opportunities to do it better.</p><p>As adults, we have a harder time shutting off distractions when pursuing an idea we are passionate about. My kids easily ignored the non-relevant elements to deliver on their goal of building a game. But as adults, when we take first steps, we often begin questioning ourselves based on our experience, insecurities, and appetite for risk-taking. Is my approach correct? Should I read a book first? Are we doing it wrong, and is there a better way? What risks am I taking? What if I fail? Of course, some of these questions are critically important to answer, but they are not the ones we always prioritize answering first. We are as rational as our emotions allow us to be, and, ultimately, even though we know exactly how to succeed, we manage to drag ourselves down the longer, more difficult path. This also applies to groups of people with a common goal, such as a team or organization. <br></p><figure class="kg-card kg-image-card"><img src="https://lh3.googleusercontent.com/_W36cRyv7UbgHg7t_joG1t-1tEB6jtAsvbGy2LsQG8Zno9RU04bRm06Vl4kE-jTKDyaV0584rsnRzzqnyZbr5SgmcQR0fGhavFS6QIDZIkf732V-5RaxwBw8INRBWF69Pp_WrMocxHf2c-_ZHYHlugEnBOb15Hck2In8-WrvRHGSJP0Fgf4uLp9xVio0PQ" class="kg-image" alt="Roblox!" loading="lazy"></figure>]]></content:encoded></item><item><title><![CDATA[The Web's Getting Rusty]]></title><description><![CDATA[And so it begins: The Chromium team has announced that Chrome will begin to get a little Rust-y!]]></description><link>https://georgetheka.com/the-web-is-get-rusty/</link><guid isPermaLink="false">63d2dadd3072d31a1a70b6c0</guid><category><![CDATA[rust]]></category><category><![CDATA[c++]]></category><category><![CDATA[web]]></category><category><![CDATA[software]]></category><dc:creator><![CDATA[George Theka]]></dc:creator><pubDate>Fri, 20 Jan 2023 15:00:00 GMT</pubDate><media:content url="https://georgetheka.com/content/images/2023/01/zdenek-machacek-PEy4qZCLXss-unsplash.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://georgetheka.com/content/images/2023/01/zdenek-machacek-PEy4qZCLXss-unsplash.jpg" alt="The Web&apos;s Getting Rusty"><p><em>Photo by <a href="https://unsplash.com/@zmachacek?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Zden&#x11B;k Mach&#xE1;&#x10D;ek</a> on <a href="https://unsplash.com/photos/PEy4qZCLXss?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Unsplash</a></em></p><p>And so it begins: The Chromium team has announced that Chrome will begin to get a little Rust-y!</p><p><a href="https://security.googleblog.com/2023/01/supporting-use-of-rust-in-chromium.html">https://security.googleblog.com/2023/01/supporting-use-of-rust-in-chromium.html</a><br><br>They now support a Rust tool-chain for third-party libraries by enabling interoperation between it and Chromium&apos;s code-base, written in C++.<br><br>Why I think it matters: Chrome is by far the most popular browser, with a 75%+ market share. Rust is a modern systems-level language that provides better security and performance, and makes large, complex code-bases easier to manage. 
The long-term move (and it would take a long time) towards Rust should make the browsing experience safer and better for everyone. <br><br>Wait, is that it? No. Last year, Linux kernel 6.1 also enabled support for Rust, another significant signal that perhaps there is a new technology in town that might rule the long-term future of everything digital. This technology could be safer, more performant, and more energy-efficient, and could make technology more accessible at scale.<br><br>But how do we get there?<br><br>In 2023, the world still runs on COBOL, a ~65-year-old language. Some 250+ billion lines of code run our banks, hospitals, infrastructure, and the government. Decades later, we haven&#x2019;t found a good way to replace COBOL with any of the dozen or so modern languages used today. The cost of that work is extremely high, and the effort is too risky. Consider how much disruption a few hours of FAA system downtime caused just a week ago.<br><br>Now, compare the COBOL bits with the code that runs the rest of our technology: every computer, phone, microwave, electronic toy, car engine, AI, robot, and the very browser with which I am typing this post. With some exceptions, all of it is/was built using C or C++, two very similar yet separate languages with strong common roots and interoperability.<br><br>These languages, too, have been modernized over the years. The newest version of C++, C++23, is actually a great language, and if you ignore many of the older features it still supports for backwards compatibility, it is quite comparable to other popular technologies. But the older features are precisely what is used to manage existing code-bases like Chromium (Chrome&#x2019;s foundational code-base), so those features are here to stay. 
Given the enormous scope, C/C++ also have very large and active communities who are innovating day by day to deliver the things that others build upon -- the foundations of all technology: operating systems, networking equipment, system programs, device controls, game engines, microcontroller programs, etc.<br><br>How do we absorb this cost as a society? Who&#x2019;s got the money and the time to rewrite the world? Are the tradeoffs worth it? Are the benefits better than marginal gains? Does the strategy of enabling a little Rust in more projects really lead to a long-term transformation at scale without something much more fundamental? Does extreme automation via AI and ML in the near future perhaps enable this?<br><br>What do you think?</p>]]></content:encoded></item><item><title><![CDATA[Adaptive Architecture]]></title><description><![CDATA[Today's distributed software systems are typically designed with just enough complexity to satisfy basic functional requirements. But for products designed for hyper-growth, non-functional requirements like performance and resource costs could become functional requirements at scale.]]></description><link>https://georgetheka.com/adaptive-architecture/</link><guid isPermaLink="false">6057e652f357351b3208d885</guid><dc:creator><![CDATA[George Theka]]></dc:creator><pubDate>Sun, 22 Aug 2021 23:11:19 GMT</pubDate><media:content url="https://georgetheka.com/content/images/2021/08/intro.png" medium="image"/><content:encoded><![CDATA[<img src="https://georgetheka.com/content/images/2021/08/intro.png" alt="Adaptive Architecture"><p><em>Note: This post is about distributed systems design</em></p><h3 id="contents">Contents</h3><ol><li>Introduction</li><li>An Example Problem</li><li>An Adaptive Design Solution</li><li>An Example Implementation</li><li>Results</li><li>Conclusion</li></ol><p></p><h2 id="1-introduction">1. 
Introduction</h2><p>Today&apos;s distributed software systems are typically designed with just enough complexity to satisfy basic functional requirements. This approach makes sense because a simple initial design means shipping a product faster and cheaper, therefore reducing the short-term risk to market.</p><p>But for products designed for hyper-growth, non-functional requirements like performance and resource costs could become functional requirements at scale. And while performance issues related to internal design choices are easier to address down the road, those that depend on external factors, such as vendors, may not become obvious until much later. That is why it is extremely important to invest in evaluating, benchmarking, and testing external factors with more scrutiny as part of the discovery phase and before major dev investment has begun. Once identified, these cases may require thinking about an elastic design up-front as a way to scale the product down the road.</p><p>There&apos;s a great <a href="https://www.youtube.com/watch?v=RT46MpK39rQ">talk</a> by now-retired C++ expert, Scott Meyers, where he beautifully explains how Donald Knuth&apos;s famous quote &quot;...premature optimization is the root of all evil...&quot; has often been relayed with some crucial context left out. The quote was meant for the little optimizations in a system, not foundational system design.</p><hr><h2 id="2-an-example-problem">2. An Example Problem</h2><p>Let&apos;s illustrate this with a fictional problem. A team builds a software product consisting of an App and its backend API.
The product will allow users to fetch the current balance of a digital wallet on demand so they can make a real-time decision about something they want to buy.</p><figure class="kg-card kg-image-card"><img src="https://georgetheka.com/content/images/2021/08/image-3.png" class="kg-image" alt="Adaptive Architecture" loading="lazy" width="1342" height="472"></figure><p>In order to fetch the wallet balance, the API must forward the request to a vendor&apos;s API which provides wallet access. In this scenario, the vendor is really the &quot;server&quot; and the client API is mostly a proxy.</p><figure class="kg-card kg-image-card"><img src="https://georgetheka.com/content/images/2021/08/image-5.png" class="kg-image" alt="Adaptive Architecture" loading="lazy" width="1406" height="776"></figure><p>Initial user experience studies show that most users will check their balance at lunch time and after work. Peak days will be Thu, Fri, and Sat. This means the number of API requests, and therefore vendor API activity, will vary over time.</p><figure class="kg-card kg-image-card"><img src="https://georgetheka.com/content/images/2021/08/image-6.png" class="kg-image" alt="Adaptive Architecture" loading="lazy" width="1366" height="1062"></figure><p>In addition, the product team concludes that the quality of user experience will be driven by two important factors: </p><ul><li>UI speed - which depends on vendor response time. Obviously, the client can cache server responses, but that won&apos;t solve the next factor...</li><li>Data accuracy - which depends on the vendor&apos;s ability to resolve on-demand requests reliably</li></ul><p>The product team projects technical benchmarks and speaks with the vendor, which assures the team that the service will deliver what they need. Here&apos;s a recap of what the fictional contract looks like.</p><hr><p><em>SLA Contract With Client.
Vendor guarantees to</em></p><ul><li><em>support up to 100 concurrent client requests 99.95% of the time</em></li><li><em>deliver a response time of 20ms for 99.95% of all client requests</em></li><li><em>successfully respond to 99.95% of all valid client requests</em></li></ul><hr><p>Great. The team implements the integration and is ready to go to market. Then reality sets in. As the number of users grows, the team notices that the SLAs aren&apos;t always observed. The problem gets worse as the product becomes more popular. Certain user segments are hit even harder for unknown reasons.</p><figure class="kg-card kg-image-card"><img src="https://georgetheka.com/content/images/2021/08/image-7.png" class="kg-image" alt="Adaptive Architecture" loading="lazy" width="2408" height="1288"></figure><p>After a lot of back and forth with the vendor, the team realizes that the vendor won&apos;t be able to deliver better results anytime soon. Moving to a new vendor won&apos;t be possible at this stage: it is a strategic move that would require restarting a complex negotiation process, a new technical implementation, and a migration process that may take years.</p><h2 id="3-an-adaptive-design-solution">3. An Adaptive Design Solution</h2><p>One potential solution would be to loosen up the synchronous and static design of the system and introduce a few components, starting from the vendor to the user:</p><ul><li><strong>A forward proxy</strong> to act as a broker between the two parties. Aside from transparent communication to and from the server and implementing basic resiliency patterns (e.g.,
simple retries, timeouts, circuit breakers, or even fallbacks to local caches for certain cases), the proxy&apos;s other important role is to provide telemetry to the client about the server&apos;s performance.<br></li><li><strong>A client</strong> which will own the communication strategy with the vendor<br></li><li><strong>A queue</strong> which will act as ephemeral storage for queuing requests<br></li><li><strong>A producer</strong> service which is responsible for brokering requests for the API. It allows the API to focus on serving the App while producing requests to the client on its behalf.<br></li><li><strong>A telemetry solution</strong> not shown for simplicity. Instead, the dotted arrows show the feedback loops enabled by this telemetry.</li></ul><figure class="kg-card kg-image-card"><img src="https://georgetheka.com/content/images/2021/08/image-11.png" class="kg-image" alt="Adaptive Architecture" loading="lazy" width="1968" height="1188"></figure><h3 id="31-the-approach">3.1. The Approach</h3><p>Let&apos;s start with a workbench test. First, we will simplify the system by temporarily ignoring the App and the API and focus on the producer, which can be artificially configured to continuously generate requests uniformly distributed at <em>R<sub>API</sub></em>, the maximum measured API traffic rate (requests per second). This will ensure that traffic peaks are covered. In this context, the producer is greedy: <em>I must produce at R<sub>API</sub> rate at all times and at all cost. I will also accept anything within the range [min, R<sub>API</sub>] if necessary but I will continue pushing for R<sub>API</sub> over time. I won&apos;t give up</em>.</p><p>Second, let&apos;s develop a client that is smart enough to negotiate with the producer: <em>ok, I will help you, producer, but I will let you know when you might want to slow things down or speed things up.
I will take care of the rest.</em></p><p>Third, let&apos;s implement a queue which will provide a back-pressure-absorbing layer between the client and the producer so that shockwaves from the vendor server never reach the API. We&apos;ll assume the queue in this example has much higher throughput than the production rate <em>R<sub>API</sub></em>.</p><p>Finally, what is left is to figure out how to implement the smart client. Let&apos;s consider the three SLAs being broken one at a time.</p><h3 id="311-scenario-1-changes-to-server-rate-or-concurrent-requests-limits">3.1.1. Scenario 1: Changes to server rate or concurrent requests limits</h3><p>Server and concurrency limits should not have any effect on the client system as long as the server rate <em>R<sub>s</sub></em> and the server concurrency limit <em>P<sub>s</sub></em> &#xA0;(P for pool size) are such that </p><p><em>		P<sub>API</sub> &lt;&lt; P<sub>s</sub> </em>			and 			<em>R<sub>API</sub> &lt;&lt; R<sub>s</sub></em></p><p>Otherwise, the server will begin responding to impacted requests with a typical <em>429/TooManyRequests</em> response code. (Remember we are using simple HTTP for this example.) The client must put these requests back on the queue for reprocessing and then signal the producer to slow down if re-enqueuing reaches a threshold. Given that the client doesn&apos;t know what the new viable rate must be, the feedback for the producer is intentionally open. The producer then lowers the ceiling of the range to <em>[min, R<sub>NEW</sub>]</em> &#xA0;where</p><p><em>		R<sub>NEW</sub> = 0.75 x R<sub>API</sub></em></p><p>It continues lowering the rate recursively until the client&apos;s feedback is no longer negative. The producer must also memorize the new ceiling <em>R<sub>NEW</sub> </em>and attempt to raise it at a less frequent interval, provided the client signal remains positive, following an exponential curve that continuously approaches <em>R<sub>API</sub></em> again.
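</p><p>The back-off-then-probe loop described above can be sketched in a few lines of Python (an illustrative sketch of my own, not the demo&apos;s actual Go code; the recovery factor is an assumed parameter):</p>

```python
class AdaptiveProducer:
    """Lowers the rate ceiling on negative feedback and probes it back up
    toward the target rate, R_API, on positive feedback."""

    def __init__(self, target_rate, recovery_factor=0.5):
        self.target = target_rate    # R_API, the desired production rate
        self.ceiling = target_rate   # current upper bound of [min, R_NEW]
        self.recovery_factor = recovery_factor

    def on_feedback(self, negative):
        if negative:
            # re-enqueuing crossed the threshold: lower the ceiling
            self.ceiling *= 0.75
        else:
            # close part of the remaining gap each time, so the ceiling
            # approaches the target along an exponential curve
            self.ceiling += (self.target - self.ceiling) * self.recovery_factor
        return self.ceiling
```

<p>Starting at 100 requests / second, two negative signals lower the ceiling to 75 and then 56.25; a run of positive signals then walks it back toward 100 without overshooting.</p><p>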
This is important because it helps the producer make small but continuous changes to its production plan rather than reacting dramatically to feedback, which can result in calibration difficulties for the producer-client operation.</p><h3 id="312-scenario-2-changes-to-response-time">3.1.2. Scenario 2: Changes to response time</h3><p>The way to make up for lost time per request is to send more requests in parallel, thus increasing computational costs. <a href="https://en.wikipedia.org/wiki/Little%27s_law">Little&apos;s Law</a> can help figure out how many. It says that, given a stationary system</p><p><em>		L = &#x3BB;W &#xA0; &#xA0; &#xA0; &#xA0; &#xA0; &#xA0; &#xA0; &#xA0; 		</em>where</p><p><em>L</em> - the long-term <em>average</em> number of customers in a stationary system<br><em>&#x3BB;</em> - the long-term <em>average</em> effective arrival rate<br><em>W</em> - the <em>average</em> time a customer spends in the system</p><p>Note a couple of important things. This formula talks about averages, and it isn&apos;t computer-science specific. It may as well apply to people waiting to enter a large football stadium with multiple lines and gates. We can redefine it to fit our problem: </p><p><em>a customer</em> <strong>is</strong> <em>a client request</em><br><em>arrival rate</em> <strong>is</strong> <em>production rate</em><br><em>time spent in system <strong>is</strong></em> <em>request time</em></p><p>then we can rewrite the relationship above as</p><p><em>		P<sub>API</sub> = R<sub>API</sub> T</em></p><p>where</p><p><em>P<sub>API</sub> - </em> the average number of concurrent requests, which defines a client &quot;thread&quot; pool size P * <br><em>R<sub>API</sub> - the average production rate</em><br><em>T</em> - the average response time, which includes server, proxy, and client time</p><p><em>* Given our scale context, it&apos;s likely that by &quot;thread&quot; here we mean a type of virtual thread.
We are talking about potentially hundreds or even thousands of threads on a web server, which are only possible using reactive/async solutions and not native threads. In the implementation example below I have used goroutines to achieve and control concurrency for this purpose.</em></p><p>With this approach we have enabled a way to increase the number of concurrent threads to <em>P<sub>API</sub></em> to maintain <em>R<sub>API</sub> </em>as long as <em>P<sub>API</sub> &lt;&lt; P<sub>s</sub> </em>from scenario 1 holds true.</p><h3 id="313-scenario-3-changes-to-response-success-rate">3.1.3. Scenario 3: Changes to response success rate</h3><p>An accumulator on the client tracks the request failure count. If this count rises above a critical predetermined threshold, the client can do a few things:</p><ul><li>alert the producer to slow down, just like in scenario 1, because the root cause of functional issues is often saturation-related</li><li>alert the team (this is where manual intervention is finally needed)</li><li>retry the failed requests by re-enqueueing them and serve cached data along with proper messaging for the user</li></ul><hr><h2 id="4-an-example-implementation">4. An Example Implementation</h2><p>How does this type of design work in practice? How can one build such a system? </p><p>For demo purposes I implemented a distributed system example with Golang so that I could take advantage of goroutines, a type of lightweight virtual thread solution that comes with the native Go runtime. However, similar implementations can be delivered with other reactive/async architectures in any technology stack.
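</p><p>As a quick sanity check of the Little&apos;s Law sizing from Scenario 2 (a Python sketch; the helper name is mine and is not part of the demo):</p>

```python
def pool_size(rate_per_s, avg_response_s):
    """Little's Law (L = lambda * W): the average number of in-flight
    requests equals the production rate times the average response time."""
    return max(1, round(rate_per_s * avg_response_s))

# At R_API = 100 requests/s and T = 100ms, about 10 concurrent workers
# are needed; if T grows to 1s at the same rate, the pool grows to ~100.
```

<p>These are roughly the numbers the demo converges to when the simulated server slows down.</p><p>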
This solution spins up several services which are already pre-configured to talk to one another on localhost.<br><br>You can find the entire source code here: <a href="https://github.com/georgetheka/adaptive-architecture">https://github.com/georgetheka/adaptive-architecture</a></p><p>To test the system, build and run it using these steps:</p><pre><code>make install
make build
make run</code></pre><p>To stop the system, use:</p><pre><code>make stop</code></pre><p>For more details, see the README file:</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://github.com/georgetheka/adaptive-architecture/blob/main/README.md"><div class="kg-bookmark-content"><div class="kg-bookmark-title">adaptive-architecture/README.md at main &#xB7; georgetheka/adaptive-architecture</div><div class="kg-bookmark-description">Contribute to georgetheka/adaptive-architecture development by creating an account on GitHub.</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://github.githubassets.com/favicons/favicon.svg" alt="Adaptive Architecture"><span class="kg-bookmark-author">GitHub</span><span class="kg-bookmark-publisher">georgetheka</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://opengraph.githubassets.com/f4d8caa9c19bb345ac14b0a4a944dcc7d2088cde01460d871f9abe026631f190/georgetheka/adaptive-architecture" alt="Adaptive Architecture"></div></a></figure><hr><h2 id="5-results">5. Results</h2><p>Fig. 1 shows the stable system running with the following characteristics:</p><p><em>Producer Rate: R<sub>API</sub> = 100 requests / second</em><br><em>Thread Pool Size: P<sub>API</sub> = 10 (goroutines)</em><br><em>Response Time AVG: T = 100ms</em></p><p>The actual production rate is about 85 requests / second. This means the system is working at about 85% efficiency due to some loss in synchronization and optimization. </p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://georgetheka.com/content/images/2021/08/100_1.png" class="kg-image" alt="Adaptive Architecture" loading="lazy" width="2394" height="1100"><figcaption>Fig 1.
In this equilibrium state the system is using the minimum computational and network resources possible to deliver the required throughput.</figcaption></figure><h3 id="51-scenario-1-changes-to-server-rate-or-concurrent-requests-limits">5.1. Scenario 1: Changes to server rate or concurrent requests limits</h3><p>What if the rate limit suddenly dropped below the level agreed upon, to, say, only 50% of the required rate? We can cut the rate in half by calling this endpoint:</p><pre><code>curl http://localhost:7777/reduceratelimit</code></pre><p>and then we can increase the rate limit by calling this endpoint:</p><pre><code>curl http://localhost:7777/increaseratelimit</code></pre><p>The client notices that too many requests are being rejected due to the low rate limit, and it absorbs the back-pressure from the server by re-enqueuing the events and notifying the producer to slow down. The producer then slows down by guessing a lower rate limit and over time continues to adjust its rate up or down based on the client input. Once the client signals that the server rate limit has stabilized, the producer gradually increases the throughput to match the desired state. At the same time, the client scales down its resources to re-achieve the equilibrium state.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://georgetheka.com/content/images/2021/08/200_2.png" class="kg-image" alt="Adaptive Architecture" loading="lazy" width="2412" height="1108"><figcaption>Fig 2. Rate is reduced then increased. System gradually stabilizes to desired throughput</figcaption></figure><h3 id="52-scenario-2-changes-to-response-time">5.2.
Scenario 2: Changes to response time</h3><p>Next, let&apos;s simulate a server slowdown by forcing the server to double the response time each time the following server endpoint is called.</p><pre><code>curl http://localhost:7777/slowdown</code></pre><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://georgetheka.com/content/images/2021/08/100_2.png" class="kg-image" alt="Adaptive Architecture" loading="lazy" width="2404" height="1118"><figcaption>Fig 3. Response time suddenly doubles</figcaption></figure><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://georgetheka.com/content/images/2021/08/100_3-1.png" class="kg-image" alt="Adaptive Architecture" loading="lazy" width="2414" height="1104"><figcaption>Fig 4. Response time continues to rise</figcaption></figure><p>A few moments later, the system finds a new equilibrium point, maintaining consistent throughput at the previous efficiency level but with more resources (parallel workers), and therefore at a higher but optimal cost for the new conditions. Notice the new pool size has self-adjusted from 10 to 100 (in actuality ~90).</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://georgetheka.com/content/images/2021/08/100_4-1.png" class="kg-image" alt="Adaptive Architecture" loading="lazy" width="2414" height="1092"><figcaption>Fig 5. System has stabilized throughput at a higher but minimum cost for the new rate</figcaption></figure><p>Now, let&apos;s speed up the vendor system again. The following endpoint will cut the response time in half for every call.
The system will then respond by recalibrating itself, reducing the number of resources back to minimal levels.</p><pre><code>curl http://localhost:7777/speedup</code></pre><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://georgetheka.com/content/images/2021/08/100_5-1.png" class="kg-image" alt="Adaptive Architecture" loading="lazy" width="2414" height="1086"><figcaption>Fig 6. After response time normalization, system reacts by minimizing its resources</figcaption></figure><h3 id="53-scenario-3-changes-to-response-success-rate">5.3. Scenario 3: Changes to response success rate</h3><p>The client observes the number of failures, and once a certain threshold is reached, several things can happen:</p><ul><li>an alert is sent to the team (not implemented)</li><li>the producer can be alerted to slow down (not implemented)</li><li>errors can be re-enqueued for a second try (not implemented)</li></ul><p>We can increase the probability that requests fail randomly and uniformly using the following endpoint:</p><pre><code>curl http://localhost:7777/control?percentage_reqs_fail=20</code></pre><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://georgetheka.com/content/images/2021/08/300_2.png" class="kg-image" alt="Adaptive Architecture" loading="lazy" width="2418" height="1126"><figcaption>Fig 7. Error rate increase detected by the client</figcaption></figure><h3 id="52-wait-one-second">5.4. Wait one second</h3><p>Hold on. I understand that the dynamics between the client &lt;&gt; producer are now changing to automatically adapt to the new conditions. But how does this translate to an improved user experience? For this to happen, additional infrastructure would be needed, not shown in the design. For example, an additional memory cache would be required to track the open state of not-yet-fulfilled app requests.
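</p><p>A minimal sketch of such a tracker (names and the delay threshold are my own assumptions; a real deployment would likely use a shared cache rather than process memory):</p>

```python
import time

class PendingRequests:
    """Tracks open (not-yet-fulfilled) app requests so the API can tell
    the App when to fall back to an async polling strategy."""

    def __init__(self, delay_threshold_s=0.05):
        self.delay_threshold_s = delay_threshold_s
        self._open = {}  # request_id -> time the request was submitted

    def submit(self, request_id):
        self._open[request_id] = time.monotonic()

    def is_delayed(self, request_id):
        started = self._open.get(request_id)
        if started is None:
            return False  # unknown or already fulfilled
        return time.monotonic() - started > self.delay_threshold_s

    def fulfill(self, request_id):
        self._open.pop(request_id, None)
```

<p>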
The app itself will need the ability to switch between a sync and an async + polling strategy with an exponential backoff pattern once the API response indicates the result will be delayed. For the most active users, an LRU cache could feed a background process that continuously requests and updates info ahead of time.</p><h2 id="6-conclusion">6. Conclusion</h2><p>This example strives to demonstrate some ways to design distributed systems that can adapt their performance to external factors when operating at the desired scale.</p>]]></content:encoded></item><item><title><![CDATA[Scaling Up By Scaling Down]]></title><description><![CDATA[One way to build up your playing using tetrachords]]></description><link>https://georgetheka.com/guitar-scaling-up-by-scaling-down/</link><guid isPermaLink="false">60e72e27f357351b3208d8ed</guid><dc:creator><![CDATA[George Theka]]></dc:creator><pubDate>Wed, 18 Aug 2021 06:15:28 GMT</pubDate><media:content url="https://georgetheka.com/content/images/2021/08/blog-fig-6-1.png" medium="image"/><content:encoded><![CDATA[<img src="https://georgetheka.com/content/images/2021/08/blog-fig-6-1.png" alt="Scaling Up By Scaling Down"><p>Note: this is a blog post for guitarists and musicians.</p><p>It&apos;s not uncommon for aspiring guitarists to begin the journey of lead and improv guitar using this scale:</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://georgetheka.com/content/images/2021/08/blog-fig-1a.png" class="kg-image" alt="Scaling Up By Scaling Down" loading="lazy" width="3853" height="1554"><figcaption>The most common C-Major scale pattern</figcaption></figure><p>But it becomes obvious early on that these patterns cannot be used for playing more fluid, expressive lines. What&apos;s the solution?
Three-note-per-string patterns.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://georgetheka.com/content/images/2021/08/blog-fig-2a.png" class="kg-image" alt="Scaling Up By Scaling Down" loading="lazy" width="3842" height="1544"><figcaption>The three-notes-per-string C-Major scale pattern</figcaption></figure><p>Okay, now the lines are more fluent, but playing key changes hasn&apos;t gotten any easier. It requires shifting patterns up and down the neck, which breaks line continuity and musical thinking, and makes the guitar a challenging instrument for playing anything with more complex harmony than a major or minor key.</p><p>Some advanced guitarists also experiment with four-note-per-string scales and patterns. </p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://georgetheka.com/content/images/2021/08/blog-fig-3.png" class="kg-image" alt="Scaling Up By Scaling Down" loading="lazy" width="3860" height="1536"><figcaption>A four-notes-per-string C-Major scale pattern</figcaption></figure><p>But there&apos;s a problem: four-note-per-string scales go up/down in fourths <strong>and</strong> guitars are also tuned in fourths (mostly). That requires inevitable shifting along both the X and Y axes of the fretboard. It is pretty challenging to play the guitar this way. It is difficult to build muscle memory in a two-dimensional plane when there are no anchors (i.e. physical traits) to reliably help the player find their way on the fretboard without continuous eye-hand coordination. </p><p>Even though playing diagonally gives you the best range and the best intonation, and it is often how people <em>appear</em> to play, most have developed and internalized a framework of positional and linear anchors to get them there.</p><hr><p>But there&apos;s also another common issue with all of the above patterns.
They tend to focus on the lower strings, which, on the guitar, fall a whole octave below middle C and are not suitable for melodic, fluent, or expressive playing. </p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://georgetheka.com/content/images/2021/08/blog-fig-4a.png" class="kg-image" alt="Scaling Up By Scaling Down" loading="lazy" width="1900" height="508"><figcaption>Common scale patterns start an octave or more below middle C</figcaption></figure><h2 id="scaling-it-down">Scaling it Down</h2><p>One way to address this issue is to break it down into the smallest pieces and build it back up from there. Let&apos;s start by finding the smallest fretboard block that could allow playing in any key and require minimal movement and shifting.</p><p>I propose the range from <strong>C5-F5</strong> (extended with B4 or F#5 as needed) on the E string for several reasons:</p><ul><li>It is a natural middle point on the fretboard and allows the instrument to stay in equilibrium without physical effort</li><li>The frets are not too big or small. In this area, the average guitarist should be able to play a full perfect fourth between the index and the pinkie fingers. (This range typically goes down to a major third in the largest frets, and up to a tritone or more in the smallest frets)</li><li>This is an expressive range for playing melodies (alto-soprano range) and the notes are easy to identify for those getting started</li></ul><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://georgetheka.com/content/images/2021/08/blog-fig-41.png" class="kg-image" alt="Scaling Up By Scaling Down" loading="lazy" width="3129" height="2331"><figcaption>Illustration of the C5 to F5 range</figcaption></figure><h2 id="introducing-guitar-tetris">Introducing Guitar Tetris</h2><p>Great. But how can one play in this range? Here come tetrachords - four-note patterns that can help. Let&apos;s focus on the major scales for simplicity.
Four tetrachords can cover all keys:</p><ul><li>WWH</li><li>WHW</li><li>HWW</li><li>WWW</li></ul><p>where W = whole-tone interval between two notes (e.g. C - D)<br>and H = half-tone interval between notes (e.g. E - F)</p><p>For example, the four notes: <em>C, D, E, F</em> form a <em>W-W-H</em> pattern and they cover different parts of each major scale for C Major and F Major.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://georgetheka.com/content/images/2021/08/blog-fig-5-1.png" class="kg-image" alt="Scaling Up By Scaling Down" loading="lazy" width="3859" height="1557"><figcaption>Tetrachord for C and F Major</figcaption></figure><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://georgetheka.com/content/images/2021/08/blog-fig-61.png" class="kg-image" alt="Scaling Up By Scaling Down" loading="lazy" width="1194" height="263"><figcaption>Notes and intervals in the WWH tetrachord</figcaption></figure><p>Here are four other notes - <em>C, D, Eb, F</em> which form the <em>W-H-W</em> tetrachord which covers parts of the <em>Bb</em> and <em>Eb Major</em> scales.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://georgetheka.com/content/images/2021/08/blog-fig-6.png" class="kg-image" alt="Scaling Up By Scaling Down" loading="lazy" width="3862" height="1549"><figcaption>The WHW tetrachord</figcaption></figure><figure class="kg-card kg-image-card"><img src="https://georgetheka.com/content/images/2021/08/blog-fig-62.png" class="kg-image" alt="Scaling Up By Scaling Down" loading="lazy" width="1111" height="263"></figure><p>I hope you get the point. If we go through the four tetrachord patterns: <em>WWH, WHW, HWW, WWW</em>, we would be covering major scales in the Circle of Fourths - all twelve keys. 
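</p><p>For the programmatically inclined, the four patterns can be spelled out with a small Python helper (a sketch of my own; note names use flat spellings for simplicity):</p>

```python
# Spell a four-note tetrachord from a root note and a W/H interval pattern.
NOTES = ["C", "Db", "D", "Eb", "E", "F", "Gb", "G", "Ab", "A", "Bb", "B"]
STEP = {"W": 2, "H": 1}  # whole tone = 2 semitones, half tone = 1

def tetrachord(root, pattern):
    i = NOTES.index(root)
    notes = [root]
    for interval in pattern:
        i = (i + STEP[interval]) % 12
        notes.append(NOTES[i])
    return notes
```

<p>For example, tetrachord(&quot;C&quot;, &quot;WWH&quot;) yields C, D, E, F and tetrachord(&quot;C&quot;, &quot;WHW&quot;) yields C, D, Eb, F, matching the figures above.</p><p>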
Here is an illustration.</p><figure class="kg-card kg-image-card"><img src="https://georgetheka.com/content/images/2021/08/blog-fig-5a.png" class="kg-image" alt="Scaling Up By Scaling Down" loading="lazy" width="1734" height="1172"></figure><figure class="kg-card kg-image-card"><img src="https://georgetheka.com/content/images/2021/08/image-13.png" class="kg-image" alt="Scaling Up By Scaling Down" loading="lazy" width="1396" height="976"></figure><p>Here&apos;s a rudimentary example of improvising within this very limited range through a Circle of Fourths modulation.</p><figure class="kg-card kg-embed-card kg-card-hascaption"><iframe width="200" height="113" src="https://www.youtube.com/embed/W6-6RbkK_yY?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe><figcaption>Improv example in C5-F5 range</figcaption></figure><h2 id="impact">Impact</h2><p>I hope by now it has become apparent that, even with this tiny range, it is possible to come up with contrived yet fluent and expressive ideas that require little shifting and only a few patterns to master. This approach can help guitarists focus on music, not technique, break out of complex box-pattern thinking, and play fluently across complex harmonic changes.</p><p>Note: even though tetrachords contain four notes, this approach <em>does not</em> prescribe four-finger playing as a necessity for these patterns. It may be more comfortable for some to instead use whichever fingers are dominant in their style. My own hands are just average and I have to shift the left hand often to hit certain notes.
But I believe that guitarists who get comfortable playing at least up to a perfect fourth on the higher strings can more quickly open up their playing range across the guitar neck.</p><h2 id="scaling-up-again">Scaling Up Again</h2><p>As a next step, the pattern can be applied on the next string, B, within the same fret range. One will notice right away that the same tetrachord patterns apply even though they are shifted by a fourth. This expansion should feel natural quickly.</p><p>The third step is the most critical: extend the tetrachord patterns across the first two strings. Two strings are enough to play just about any melodic phrase. Two strings can also introduce contrapuntal elements much more gently and consistently. I have found this step to be an important period that requires systematic practice before scaling to other strings. </p><h3 id="detourgamifying-practice">Detour - Gamifying Practice</h3><hr><p>At this point in life, music is a hobby for me. I rarely have the time to practice seriously, so I do the best I can by gamifying practice sessions whenever possible, using code to hack together various automations. </p><p>Here&apos;s a Python script (requires macOS) that uses synthesized speech to call out random keys alongside a metronome click. The goals are:</p><ul><li>play simply and on time</li><li>listen carefully and anticipate the next move</li><li>try to keep a continuous flow of musical ideas </li></ul><figure class="kg-card kg-bookmark-card kg-card-hascaption"><a class="kg-bookmark-container" href="https://github.com/georgetheka/kitchen-sink/blob/main/music/practice_keys.py"><div class="kg-bookmark-content"><div class="kg-bookmark-title">kitchen-sink/practice_keys.py at main &#xB7; georgetheka/kitchen-sink</div><div class="kg-bookmark-description">Random Utilities.
Contribute to georgetheka/kitchen-sink development by creating an account on GitHub.</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://github.githubassets.com/favicons/favicon.svg" alt="Scaling Up By Scaling Down"><span class="kg-bookmark-author">GitHub</span><span class="kg-bookmark-publisher">georgetheka</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://opengraph.githubassets.com/3a614f5550169a64cad2ddc1c5e0ce1f3502ba2979109b2e39416fc1003583ec/georgetheka/kitchen-sink" alt="Scaling Up By Scaling Down"></div></a><figcaption>Python Script for practicing key changes</figcaption></figure><figure class="kg-card kg-embed-card kg-card-hascaption"><iframe width="200" height="113" src="https://www.youtube.com/embed/-Xg_FwrwKKI?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe><figcaption>Improvising through randomly moving keys</figcaption></figure><hr><p>And finally, I recorded a quick freestyle improv that uses these tetrachord patterns, particularly focused on the higher strings.</p><figure class="kg-card kg-embed-card"><iframe width="200" height="113" src="https://www.youtube.com/embed/Ms57aJU5nGc?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe></figure><p>I hope this was helpful in some way and I look forward to your feedback!</p>]]></content:encoded></item><item><title><![CDATA[Kitara]]></title><description><![CDATA[Building a program that converts any MIDI guitar into a customizable computer keyboard]]></description><link>https://georgetheka.com/guitar-to-work/</link><guid isPermaLink="false">6057f2c3f357351b3208d8ba</guid><category><![CDATA[guitar]]></category><category><![CDATA[software development]]></category><dc:creator><![CDATA[George Theka]]></dc:creator><pubDate>Sun, 29 Nov 2020 05:00:00 
GMT</pubDate><media:content url="https://georgetheka.com/content/images/2021/08/image--1-.png" medium="image"/><content:encoded><![CDATA[<img src="https://georgetheka.com/content/images/2021/08/image--1-.png" alt="Kitara"><p>After putting up the Christmas tree early with the kids, I spent a few hours this Sunday afternoon on another geeky techno-music experiment: building a program that converts any MIDI guitar into a customizable computer keyboard.<br><br>Although getting this to work coding-wise was not difficult, the hard part was figuring out how to map a guitar to a keyboard. After many tries, I found the right way: split the keyboard and flip the left side upside down. At that point, it becomes a regular QWERTY keyboard and I started typing immediately and with almost no practice.</p><figure class="kg-card kg-image-card"><img src="https://georgetheka.com/content/images/2021/08/image.png" class="kg-image" alt="Kitara" loading="lazy" width="2346" height="450"></figure><p>The code and the mapping are available here: <a href="https://github.com/georgetheka/kitara">https://github.com/georgetheka/kitara</a></p><figure class="kg-card kg-embed-card"><iframe width="480" height="270" src="https://www.youtube.com/embed/0iuojEyKKMc?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe></figure>]]></content:encoded></item><item><title><![CDATA[The 627]]></title><description><![CDATA[Converting a regular six-string guitar into a hybrid seven-string.]]></description><link>https://georgetheka.com/627/</link><guid isPermaLink="false">6057f2c3f357351b3208d8b8</guid><category><![CDATA[guitar]]></category><category><![CDATA[electronics]]></category><category><![CDATA[music]]></category><dc:creator><![CDATA[George Theka]]></dc:creator><pubDate>Sat, 25 Jul 2020 14:00:00 GMT</pubDate><media:content url="https://georgetheka.com/content/images/2021/08/guitar-1.png" 
medium="image"/><content:encoded><![CDATA[<img src="https://georgetheka.com/content/images/2021/08/guitar-1.png" alt="The 627"><p>Converting a regular six-string guitar into a hybrid seven-string.</p><p>Long story short: my interest in playing a hybrid guitar (bass + guitar as one instrument) got renewed during the beginning of the pandemic, when playing with people in NYC was literally a choice between life and death. </p><p>Although I have experimented in the past with 8-string and 7-string guitars and various custom modifications, I never found myself falling in love with them. Adjusting to a different string set and tuning wasn&apos;t the issue. More strings meant many new possibilities but also a loss of expressiveness and fluency with a musical vocabulary built through the years. It felt like learning a new language, and any language takes many years to master.</p><p>So, I started thinking creatively. I sketched out what I wanted: a regular guitar that allowed me to play an emulated bass and guitar combo with a comfortable range on each instrument. How would that even work? Well, if it meant that some of the ranges overlap somewhere in the middle -- that&apos;s okay -- many melody lines in that shared range are often played in unison anyway. It&apos;s a good compromise.</p><p>Once I came to terms with that approach, the experimentation began. I designed a simple electronic circuit that utilizes a Seymour Duncan Duckbucker specialized pickup to allow signal splitting into two ranges of three strings each. This circuit has only one volume control via a 500K pot for a darker, warmer sound, its own 1/4&quot; jack, and the entire unit is grounded separately. This signal is sent through a bass synth pedal and other effects in order to emulate a 3-string bass.</p><p>Then, I designed a second, more traditional, circuit that treats the upper range of the Duckbucker as a single-coil pickup. 
A 250K push-pull pot, combined with a classic 5-way switch, allows switching between humbucker and single-coil modes, opening up a wide range of sounds. A single tone control via a 250K pot and a 0.022&#xB5;F capacitor for brighter sound is wired to apply to all three pickups.</p><p>Removing one of the two usual Stratocaster tone controls was necessary because I needed the existing space to accommodate both circuits under the same pick-guard without having to carve out parts of the guitar body. I ended up drilling only one additional hole in the pick-guard, and although the controls are a little crammed, they&apos;re easy to handle. Besides, I think they look cool that way.</p><p>Below is a schematic of the circuits used for v1.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://georgetheka.com/content/images/2021/08/thekio-627-1.png" class="kg-image" alt="The 627" loading="lazy" width="1264" height="1148"><figcaption>Electronics for the 627</figcaption></figure><p>Version 1.0 of the electronics, wired and passing tests before assembly.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://georgetheka.com/content/images/2021/08/IMG_1563--1-.jpeg" class="kg-image" alt="The 627" loading="lazy" width="2000" height="2667"><figcaption>627 wiring</figcaption></figure><p>And finally, the first test drive. 
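</p><p>A quick aside on the tone components above: the 0.022&#xB5;F cap reads as brighter than the more common 0.047&#xB5;F choice because, to a first approximation, the pot and capacitor form an RC low-pass with corner frequency f = 1/(2&#x3C0;RC). The sketch below is a deliberate simplification (it ignores pickup inductance and loading), and the 50K source impedance is purely illustrative, not a measured value.</p>

```python
import math

def rc_corner_hz(r_ohms: float, c_farads: float) -> float:
    # First-order RC low-pass corner frequency: f = 1 / (2 * pi * R * C)
    return 1.0 / (2.0 * math.pi * r_ohms * c_farads)

R = 50e3  # illustrative source impedance (assumption, not measured)
for cap_uF in (0.022, 0.047):
    f = rc_corner_hz(R, cap_uF * 1e-6)
    print(f"{cap_uF} uF -> corner ~{f:.0f} Hz (smaller cap = higher corner = brighter)")
```

<p>Whatever the real source impedance turns out to be, the smaller cap always places the corner roughly twice as high, which is why it cuts less treble.</p><p>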
</p><figure class="kg-card kg-image-card"><img src="https://georgetheka.com/content/images/2021/08/IMG_1570--1-.jpeg" class="kg-image" alt="The 627" loading="lazy" width="2000" height="2667"></figure><p>Lastly, I trimmed two pieces of stainless steel and placed:</p><ul><li>the first on top of the two bridge humbucker magnets in order to attenuate the pickup signal for the E and A strings and create a sound separation between the bass and guitar strings.</li><li>the second (optional) covering the four higher strings of the middle pickup to absorb any leftover bleed from these strings into the bass signal</li></ul><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://georgetheka.com/content/images/2021/08/guitar1--1-.jpeg" class="kg-image" alt="The 627" loading="lazy" width="2000" height="2667"><figcaption>Using an angle grinder, I cut a couple of stainless steel pieces from an old door latch</figcaption></figure><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://georgetheka.com/content/images/2021/08/guitar2--1-.jpeg" class="kg-image" alt="The 627" loading="lazy" width="2000" height="2667"><figcaption>Finished sharp edges with a filing disk</figcaption></figure><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://georgetheka.com/content/images/2021/08/guitar3--1-.jpeg" class="kg-image" alt="The 627" loading="lazy" width="2000" height="2667"><figcaption>Metal pieces are held in place by the pickup magnetic field</figcaption></figure><p>This setup allows playing in hybrid+overlapping mode: 3 strings of bass through the middle pickup&apos;s lower range and its dedicated circuit, and 4 strings of guitar through the unattenuated bridge humbucker. Hence the name: 627 (six to seven). The overlap is the intentional compromise, and it enables expressiveness on both instruments. 
</p><figure class="kg-card kg-embed-card"><iframe width="459" height="344" src="https://www.youtube.com/embed/MIWjh7Tg3hk?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe></figure><p>And with foot drums...</p><figure class="kg-card kg-embed-card"><iframe width="480" height="270" src="https://www.youtube.com/embed/fkb523FGNok?feature=oembed" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe></figure>]]></content:encoded></item><item><title><![CDATA[Revolving Tech Debt]]></title><description><![CDATA[Avoiding debt is hard, nearly impossible. It is even harder to get rid of it. But the hardest thing is figuring out the how. Just exactly how do you get rid of tech debt? And would you want to?]]></description><link>https://georgetheka.com/revolving-tech-debt/</link><guid isPermaLink="false">6057d944f357351b3208d877</guid><dc:creator><![CDATA[George Theka]]></dc:creator><pubDate>Wed, 23 Jan 2019 00:00:00 GMT</pubDate><media:content url="https://georgetheka.com/content/images/2021/03/tech-debt.jpeg" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://georgetheka.com/content/images/2021/03/tech-debt.jpeg" alt="Revolving Tech Debt"><p><em>Cover photo credits: <a href="https://twitter.com/ExpLatinAmerica">@ExpLatinAmerica</a>. Learn more <a href="https://www.facebook.com/experimentandolatinoamerica">here</a></em>.</p>
<!--kg-card-end: markdown--><p>Avoiding debt is hard, nearly impossible. It is even harder to get rid of it. But the hardest thing is figuring out the how. <em>Just exactly how do you get rid of tech debt? And would you want to?</em></p><p>Here are some losing strategies:</p><p><strong>Strategy #1: Dedicate 20% of the sprint to tech debt.</strong></p><p>Why it fails: besides the fact that humans are terrible at working in percentages in a multitasking environment, that 20% will be the priority battleground between engineering and stakeholders. How do you respond to <em>&quot;...tech debt can wait. If we don&apos;t get this out by next week, might as well there be no product...&quot;?</em> Well, the rational answer is you can&apos;t. Feature wins, tech debt loses. You would think this would change in a bigger team. Nope. Bigger teams do have more capacity but they are also more expensive and the stakes are higher.</p><p><strong>Strategy #2: Let&apos;s sneak most of it into feature estimations. Nobody will notice.</strong></p><p>Why it fails: yeah, nobody is dumb enough to miss it. While this is a great strategy for the most basic kind of tech debt, it won&apos;t scale for paying off large chunks of it. It will artificially inflate each estimation and the team will simply fail to deliver critical work on time. Tech debt will ultimately become feature debt. And if there are multiple teams, velocity comparisons will quickly unravel that dirty magic trick.</p><p><strong>Strategy #3: The Revolt - after X months of accumulating lots of technical debt, the team finally loses it: &quot;That&apos;s it. We&apos;re going to force some tech debt into the next sprint and no one can stop us!!!&quot;</strong></p><p>Why it fails: setting aside the fact that revolts are risky business, even if they work, delivering an uneven distribution of value over time will quickly increase an organization&apos;s risk in the marketplace. 
It might feel like an internal victory, but in a tight market race, hitting the brakes on shipping continuous value for a moment is all it takes for your competitors to quickly get ahead. It can all happen in the blink of an eye.</p><h2 id="so-what-is-tech-debt-anyway">So what is tech debt anyway?</h2><p>Technical debt describes all the shortcuts a team takes to achieve a goal in the short term, making it difficult for the team -- and ultimately the business -- to succeed in the long term.</p><p>I like to think of it as using a credit card: <em>you can swipe it to buy whatever you want but at some point you are going to have to pay it back. </em>And just like a credit card, tech debt accrues compound interest over time at a pretty high interest rate (consider how quickly code goes stale today).</p><p>So, how should you pay it off? </p><p><em>A. Transactor: </em>You can pay it all back as soon as possible, but you may end up with no liquidity to pay for other things. [no team ever does this, although many want to]</p><p><em>B. Delinquent: </em>You can pay only the bare minimum regularly, but in the long term you may become delinquent and default. [many teams end up unintentionally doing this]</p><p><em>C. Revolver: </em>You can pay an amount less than the balance due: small enough that you keep cash in hand, yet large enough to avoid incurring too much interest. [more teams should be doing this]</p><p>I believe the correct answer is C. Option A means a team cannot react to market demand if it is too busy paying off debt, and option B means there is a looming default event for this particular product or system that an org may not see coming until it is too late.</p><p>But the biggest advantage of option C is that, when combined with strong planning and feedback loops, it can identify opportunity windows to sometimes <u>pay back a lot of the debt quickly</u>, particularly during moments when market pressure is low. 
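</p><p>The credit card analogy can be made concrete with a toy simulation (all numbers below are illustrative, not from any real backlog): the debt balance accrues interest every sprint, and each payer strategy chips away a different amount.</p>

```python
def remaining_debt(balance: float, rate: float, payment: float, sprints: int) -> float:
    """Toy model: debt accrues `rate` interest each sprint, then `payment` is paid off."""
    for _ in range(sprints):
        balance = max(balance * (1.0 + rate) - payment, 0.0)
    return balance

START, RATE, SPRINTS = 100.0, 0.05, 12  # illustrative: story points, 5% "interest" per sprint

print(f"delinquent (pay 2/sprint)  : {remaining_debt(START, RATE, 2.0, SPRINTS):.0f} points left")
print(f"revolver   (pay 8/sprint)  : {remaining_debt(START, RATE, 8.0, SPRINTS):.0f} points left")
print(f"transactor (pay all at once): {remaining_debt(START, RATE, 105.0, 1):.0f} points left")
```

<p>Even in this toy model, the delinquent balance grows while the revolver&apos;s shrinks: a payment plan only works if it outpaces the interest.</p><p>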
Strong feedback/planning can also identify opportunities to write off tech debt. When an org decides to discontinue products or features that are no longer enabling the business, the related tech debt also becomes void. Goodbye.</p><h2 id="a-strategy-that-works">A strategy that works</h2><p>A strategy that works well is a multi-faceted one with at least three dimensions:</p><ul><li>Definition of Done</li><li>End-of-Life Projection</li><li>Engineering as Stakeholders</li></ul><p><strong>1. Definition of Done</strong><br>An organization is better served if some aspects of tech debt are rolled into the Definition of Done for how product is shipped. The following should not be categorized as technical debt; instead, they should become part of the shipping requirements for any feature:</p><ul><li><em>Unit Test Coverage</em>: this may not make much of a dent in shipped product quality but it is the only way for a team to collectively maintain and refactor code over time without significant risk.</li><li><em>Critical Path Integration Tests</em>: these will definitely put strong assurances on shipped quality.</li><li><em>Observability</em>: logging and telemetry are as critical as your windshield, mirrors, and dashboard are to driving a car. Why bird-box your driving?</li><li><em>Deployment Configuration</em>: doing this right from the beginning significantly reduces downtime risks due to configuration changes down the road. There are no net gains when short-circuiting this process.</li></ul><p><strong>2. End-of-Life Projection</strong><br>The most common mistake teams make is not computing and communicating the capacity/occupancy limits for a shipped feature that was delivered using shortcuts and incurring tech debt. Business stakeholders are often not aware that a shipped feature has scale limitations and that, depending on the growth rate, those <em>limitations are already setting a life expectancy</em> for that feature. 
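</p><p>That projection is just compound growth arithmetic. Given the current load, the capacity ceiling baked in by the shortcut, and a growth rate (the numbers below are purely illustrative), the remaining lifetime follows from solving capacity = load &#xD7; (1 + g)^months for months:</p>

```python
import math

def months_to_capacity(load: float, capacity: float, monthly_growth: float) -> float:
    """Solve capacity = load * (1 + g) ** m for m."""
    return math.log(capacity / load) / math.log(1.0 + monthly_growth)

# Illustrative: a feature shipped with a shortcut that caps out at 1M records,
# currently holding 200K and growing 15% month over month.
print(f"~{months_to_capacity(200_000, 1_000_000, 0.15):.1f} months to end-of-life")
```

<p>Putting a number like this in front of stakeholders is what turns a vague scale limitation into a concrete life expectancy they can plan around.</p><p>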
</p><p>Set and communicate capacity and end-of-life expectations clearly from the beginning, and one of two things is likely to happen:</p><ul><li>stakeholders could be open to investing in the more scalable solution, which could mean the current tech debt becomes regular feature development.</li><li>or, they could be fine with the feature&apos;s life expectancy for the short term, but may want to regroup in the future to discuss redeveloping it for the next level of scale.</li></ul><p>In the first case, tech debt is not erased but <em>significantly reduced</em>. In the second case, tech debt is <em>not recorded</em> as such, since there is a clear end-of-life for this feature. Instead, tech debt could become the roadmap for the v2.0 of this feature.</p><p><strong>3. Engineering as stakeholders</strong><br>Consider what engineering teams typically work on during a sprint: a todo list prioritized by a product owner. The product owner has come up with that list by negotiating with several or many stakeholders: people or teams who have a stake in the code being shipped so they can advance one of their business goals, for the good of the organization.</p><p>Teams... needing to ship code... to advance... their goals... for the good of the organization... you say? That is exactly what engineering teams are trying to do when wanting to pay off tech debt.</p><p>What if, just what if:</p><p>Engineering teams take on a stakeholder role during the planning and prioritization round table. Just like any other team, they will need to make their case to get sprint time for tech debt items they believe will advance their team goals for the greater good of the business.</p><p>They no longer have to find ways to justify doing the right thing for their organization or hide their incredibly valuable work under other features. 
</p><p><em>Tech debt becomes just regular feature delivery and shipping comes with the same recognitions and celebrations as any other feature that advances goals of any other team in the organization.</em></p><p>What&apos;s even more valuable, this approach provides a two-way checkpoint for the team: having to explain the value of tech debt to a non-engineering audience not only helps develop strong cross-team communications but it can also help engineering double-check their own thinking and avoid paying off tech debt that does not advance any business goals: &quot;Is this tech debt really valuable or just a cool thing I am itching to work on?&quot;</p><h2 id="tech-debt-utilization">Tech Debt Utilization</h2><p>Just like credit card consumers are advised to not exceed the ~30% credit utilization limit in order to build and maintain their credit, it is similarly a good idea to limit tech debt to around 25%-30% of the total points of the backlog. In the credit card example, credit utilization is the ratio of the current balance due over the credit limit and that can be translated to tech debt as:</p><p><code>Tech Debt Utilization = (Tech Debt Balance / Total Backlog Balance) x 100% </code></p><p>However, in order to compute the balance, it is critically important to first record all new tech debt correctly. This can be done by creating and estimating tech debt PBIs (product backlog items) for each shortcut the team takes, just like a credit card balance is augmented with each purchase during a merchant transaction.</p>]]></content:encoded></item></channel></rss>