How one IBM architect actually uses AI coding tools day to day

admin, Database Expert
February 11, 2026
5 min read
#Artificial Intelligence #DevOps

Gabe Goodhart runs three coding projects simultaneously, with AI assistants doing most of the heavy lifting. But there’s one thing he’ll never delegate: hitting the commit button.

Goodhart, the Chief Architect of AI Open Innovation at IBM, has woven multiple AI tools into nearly every aspect of his development workflow. Claude Code and Bobshell write his code. Open WebUI handles research. ContextForge manages the tools that feed data to AI agents. He even prompts these systems to enter what he calls “YOLO” (“you only live once”) mode, giving them full autonomy to generate entire codebases while he works on something else. But when it comes time to commit changes to version control, the moment when code is formally accepted into a project’s history, Goodhart keeps that decision for himself.

“I always keep myself as the gatekeeper of git commit so that I can own the checkpointing and force myself to take ownership over what I consider ‘done,’” Goodhart told IBM Think in an interview. “This is a general recommendation, but I strongly believe that every developer should know git intimately: view the graph, make atomic commits with meaningful chunks of work, use good commit messages, use branching.”

That constraint is just one example of how developers are figuring out where to draw the line between human and machine work. Goodhart has clearly thought long and hard about what to automate and what to keep.

His daily toolkit includes multiple AI assistants that he switches between based on the task. He prefers Bobshell (a command-line coding assistant) over IDE plugins—“but that’s just me,” he said. He runs multiple instances of Open WebUI, a Perplexity-style search interface: one with local models and another with larger models on his GPU development box.
As he’s gotten into building his own AI agents, he’s added ContextForge to explore and manage MCP servers, or tools that let AI agents access external data sources. “So many!” he said when asked which AI tool had unexpectedly become part of his routine, before listing his daily drivers.

Goodhart’s setup reflects a broader shift underway in software development. As AI tools take on more of the mechanical work of writing and testing code, developers are being forced to make a different set of decisions: not just what to build, but what to automate, what to oversee, and where to draw the line between delegation and control. What follows is a look inside Goodhart’s daily stack and how those choices play out in practice.

How Goodhart uses these tools depends entirely on what he’s building. He told IBM Think that his workflow breaks into three distinct patterns, each with a different level of AI autonomy.

For greenfield coding (new scripts or entirely new projects), he goes all-in on automation. He writes a textual description of the project, including high-level capabilities and specific constraints like licensing requirements and testing practices, then dumps that description into a text file in the repository and feeds it into the initial prompt. Next comes the ideation phase, with Goodhart working through the plan iteratively with the AI. Once he and the AI are aligned on the approach, he gives the tool “pretty much YOLO mode within the current project repo,” then steps away.

“I create a single wholesale commit with the initial code (unedited), then go piece by piece and ‘chisel’ it out, either manually or with targeted prompting,” he explained.

Small feature work or debugging follows a tighter script. He fires up the AI with a short description of the issue, any thoughts on where the problem might be in the code, and a link to the open issue on GitHub if one exists. He asks it to first produce a one-off script to reproduce the issue, if necessary, then iterates until he is convinced the issue is actually reproduced “and that the AI didn’t just claim success.” At this stage, he monitors change requests carefully. No YOLO here. He makes sure to give the AI clear instructions on how to run tests. Wash, rinse, repeat until the bug is fixed.

Big features or complex debugging work lands somewhere between these two extremes.
Goodhart said he invests more effort up front, including any progress he’s made thinking through the issue himself and “as many external reference links as I can find: other PRs, similar features in other projects, online discussions.” He iterates closely with the agent, without the hands-off autonomy he grants to greenfield work.

As Goodhart describes it, the pattern is simple: the less defined the problem space, the more freedom he gives the AI. The more critical the integration point, or the higher the risk of subtle bugs, the tighter he keeps the reins.
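To make the “prove it’s reproduced, don’t just claim success” step concrete, here is a minimal sketch of what such a one-off reproduction script might look like. Everything in it is invented for illustration — the `slugify` helper, the bug report, and the exit-code convention are not from Goodhart’s actual projects; the point is only that the script returns a machine-checkable verdict an AI can’t fudge.

```python
# Hypothetical one-off repro script for a made-up bug report:
# "slugify() drops runs of separators entirely instead of emitting one dash."
# Illustrative only: the function, bug, and file are not from any real project.
import re

def slugify(title: str) -> str:
    """Buggy version under test: deletes separator runs instead of dashing them."""
    return re.sub(r"[^a-z0-9]+", "", title.lower())  # bug: "" should be "-"

def reproduce() -> bool:
    """Return True only if the reported bad behavior is actually observed."""
    got = slugify("Hello,  World!")
    expected_if_fixed = "hello-world"
    print(f"slugify -> {got!r} (want {expected_if_fixed!r})")
    return got != expected_if_fixed  # True means: bug reproduced

if __name__ == "__main__":
    # Exit 0 when the bug reproduces, non-zero otherwise, so a claimed fix
    # can be verified simply by re-running this script and watching it flip.
    raise SystemExit(0 if reproduce() else 1)
```

Re-running the same script after the fix is what turns “the AI says it’s fixed” into evidence: the verdict flips only if the observed behavior actually changed.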

According to Goodhart, the most unexpected shift in his workflow hasn’t been in the code itself, but in how he approaches problems before any code gets written.

“All of my usage patterns have started to focus on actually writing down my thinking in a clear way, with as many references as possible,” he said. “In the past, I’d keep all of this in my head as I was working. But now, I basically go through the exercise of doing it all in a random text file, then using that as the bootstrapping prompt.”

Articulating requirements clearly enough for an AI to execute them demands an upfront clarity that internal mental models often lack, Goodhart said. You can’t handwave through the fuzzy parts when you’re writing a prompt, he explained. You can’t rely on tacit knowledge that only makes sense inside your own head.

But Goodhart is candid about the downsides of AI use. “Offloading the actual code-digging has made it a bit easier to get distracted by multitasking and harder to keep the context for a given problem loaded since I’m often working on three things at once (and responding to Slack, etc.).”

Ultimately, tools that were supposed to sharpen his focus have produced more fragmentation. When the AI is doing the detail work, it’s easier to context-switch: three concurrent threads, and none of them getting deep, sustained attention.
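That “random text file, then use it as the bootstrapping prompt” exercise can be sketched in a few lines. The file name, notes content, and wrapper wording below are all invented for illustration — the source doesn’t describe Goodhart’s actual format, only the habit of writing the thinking down first and feeding it in verbatim.

```python
# Minimal sketch of turning written-down thinking into an initial prompt.
# The notes file, its contents, and the prompt framing are hypothetical.
from pathlib import Path

def build_bootstrap_prompt(notes_path: Path) -> str:
    """Wrap a free-form notes file into the first prompt for a coding agent."""
    notes = notes_path.read_text(encoding="utf-8").strip()
    return (
        "You are helping on the project described below. Follow the stated "
        "constraints (licensing, testing) exactly, and propose a plan before "
        "writing any code.\n\n--- PROJECT NOTES ---\n" + notes
    )

if __name__ == "__main__":
    notes = Path("project-notes.txt")  # hypothetical file checked into the repo
    notes.write_text(
        "Goal: CLI that dedupes photo libraries.\n"
        "Constraints: Apache-2.0 deps only; pytest for all new code.\n",
        encoding="utf-8",
    )
    print(build_bootstrap_prompt(notes))
```

The payoff he describes isn’t the wrapper itself but the forcing function: the notes file has to be explicit enough that a system with no access to his head can act on it.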
