<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Agents - Marvin Beckers</title>
    <link>https://marvin.beckers.dev/tags/agents/</link>
    <description>Agents - Marvin Beckers</description>
    <generator>Hugo - gohugo.io</generator>
    <language>en-us</language>
    <copyright>Marvin Beckers, 2020-2026</copyright>
    <lastBuildDate>Sun, 05 Apr 2026 17:00:00 +0000</lastBuildDate>
    
	<atom:link href="https://marvin.beckers.dev/tags/agents/index.xml" rel="self" type="application/rss+xml" />
    
    
    
    <item>
      <title>Don&#39;t Yell at Your LLM</title>
      <link>https://marvin.beckers.dev/blog/dont-yell-at-your-llm/</link>
      <pubDate>Sun, 05 Apr 2026 17:00:00 +0000</pubDate>
      
      <guid>https://marvin.beckers.dev/blog/dont-yell-at-your-llm/</guid>
      <description>&lt;p&gt;Maybe not surprisingly, the science of how to extract maximum value from an LLM&lt;label for=&#34;sn-buthow&#34; class=&#34;margin-toggle sidenote-number&#34;&gt;&lt;/label&gt;&lt;input type=&#34;checkbox&#34; id=&#34;sn-buthow&#34; class=&#34;margin-toggle&#34;&gt;&lt;span class=&#34;sidenote&#34;&gt;Large Language Model, a computer program trained on large amounts of human language; used for coding agents, for example. A coding agent is a program that uses an LLM to write and execute code.&lt;/span&gt; is an imprecise one. There is much advice floating around in the industry, but perhaps the most obvious piece (at least to me) is &amp;ldquo;be kind to your LLM&amp;rdquo;. Not because LLMs have feelings; they don&amp;rsquo;t. But language encodes emotion. And human interactions recorded in written language are emotional to the very core.&lt;/p&gt;
&lt;p&gt;And while the LLM doesn&amp;rsquo;t &amp;ldquo;understand&amp;rdquo; these emotions, it seems somewhat logical that being mean will produce worse results. That&amp;rsquo;s a fairly basic trait of human interaction! If you yell at someone, their answer is not going to be of better quality&lt;label for=&#34;sn-buthow&#34; class=&#34;margin-toggle sidenote-number&#34;&gt;&lt;/label&gt;&lt;input type=&#34;checkbox&#34; id=&#34;sn-buthow&#34; class=&#34;margin-toggle&#34;&gt;&lt;span class=&#34;sidenote&#34;&gt;A shock to choleric managers around the planet, I know.&lt;/span&gt;. Quite the opposite, in fact. Humans under pressure tend to produce &lt;em&gt;something&lt;/em&gt;, but it is often somewhat useless. People tend to be more helpful when they feel appreciated.&lt;/p&gt;
&lt;p&gt;It&amp;rsquo;s simply more probable that the interaction with your coding agent will go sideways if you&amp;rsquo;re not nice, because the dataset (humans interacting with each other) points in that direction. In a game of probability&lt;label for=&#34;sn-buthow&#34; class=&#34;margin-toggle sidenote-number&#34;&gt;&lt;/label&gt;&lt;input type=&#34;checkbox&#34; id=&#34;sn-buthow&#34; class=&#34;margin-toggle&#34;&gt;&lt;span class=&#34;sidenote&#34;&gt;Perhaps &lt;em&gt;the&lt;/em&gt; characteristic trait of LLMs, after all.&lt;/span&gt;, ignoring this tendency seems like an unnecessary risk.&lt;/p&gt;
&lt;p&gt;Remember the startup founder who got their production database dropped by an agent &lt;a href=&#34;https://www.theregister.com/2025/07/21/replit_saastr_vibe_coding_incident/&#34;&gt;last year&lt;/a&gt;? This particular founder was seemingly yelling&lt;label for=&#34;sn-buthow&#34; class=&#34;margin-toggle sidenote-number&#34;&gt;&lt;/label&gt;&lt;input type=&#34;checkbox&#34; id=&#34;sn-buthow&#34; class=&#34;margin-toggle&#34;&gt;&lt;span class=&#34;sidenote&#34;&gt;Perhaps understandably; mid-2025 was not a great time to get useful LLM coding output. That one is kind of on him though.&lt;/span&gt; at their agent for making mistakes &amp;ndash; overall treating the agent a bit like a sub-par intern. &lt;a href=&#34;https://xcancel.com/jasonlk/status/1944586096538714537#m&#34;&gt;Here&lt;/a&gt; is an example of what I mean. And what do stressed-out interns do if yelled at? They make more mistakes, like dropping the production database. It was the most likely thing to happen in human interactions shaped this particular way, so the agent did it.&lt;label for=&#34;sn-buthow&#34; class=&#34;margin-toggle sidenote-number&#34;&gt;&lt;/label&gt;&lt;input type=&#34;checkbox&#34; id=&#34;sn-buthow&#34; class=&#34;margin-toggle&#34;&gt;&lt;span class=&#34;sidenote&#34;&gt;Of course, this vastly oversimplifies what happened. But that&amp;#39;s the thing with LLMs, right? No one really knows why they do something in particular.&lt;/span&gt; Oops.&lt;/p&gt;
&lt;p&gt;Steve Klabnik expressed a similar sentiment &lt;a href=&#34;https://steveklabnik.com/writing/getting-started-with-claude-for-software-development/#intentionality&#34;&gt;earlier this year&lt;/a&gt;, so I&amp;rsquo;m not alone with this:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;But I do think that the attitude you bring towards this process partially dictates your success, and I think you should be conscious of that while you go on this journey.&lt;/p&gt;
&lt;p&gt;Is that too woo-y for you? Okay, let me make it concrete: I un-ironically believe that swearing at Claude makes it perform worse.&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;Maybe it&amp;rsquo;s obvious to most people, and that&amp;rsquo;s why we don&amp;rsquo;t really talk about it. Hard to say, yet I&amp;rsquo;ve seen LLM interactions I would categorize as &amp;ldquo;hostile&amp;rdquo; and &amp;ldquo;frustrated&amp;rdquo;. Yes, it&amp;rsquo;s a program running advanced math on a computer, and neither the program nor the computer has any feelings, but it&amp;rsquo;s still not productive to express your negative emotions to it.&lt;/p&gt;
&lt;p&gt;AI labs seem to be hard at work eliminating this &amp;ldquo;weakness&amp;rdquo;, possibly because they&amp;rsquo;re aware that people tend to swear at their models. You can see that some LLMs respond very differently when put under stress these days. But can they iron out a fundamental human trait, one that heavily shapes language itself? I personally have my doubts about that.&lt;/p&gt;
&lt;p&gt;The gist here is: It&amp;rsquo;s not a good idea to yell at the junior engineer who did something wrong, and the same roughly applies to your LLM&lt;label for=&#34;sn-buthow&#34; class=&#34;margin-toggle sidenote-number&#34;&gt;&lt;/label&gt;&lt;input type=&#34;checkbox&#34; id=&#34;sn-buthow&#34; class=&#34;margin-toggle&#34;&gt;&lt;span class=&#34;sidenote&#34;&gt;It&amp;#39;s a bit ironic that social skills become even more important when you&amp;#39;re no longer working with humans but with computers, yet here we are with this particular flavor of AI.&lt;/span&gt;. The difference is that the junior engineer likely learns from screwing up if you explain &amp;ndash; with patience &amp;ndash; what went wrong, but the LLM might not, as soon as your explanation leaves the context window.&lt;/p&gt;
</description>
    </item>
    
    
  </channel>
</rss>

