<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[AIPack Substack]]></title><description><![CDATA[News, insights, and applications of Agentic Runtimes and AIPack.]]></description><link>https://news.aipack.ai</link><image><url>https://substackcdn.com/image/fetch/$s_!Pzgt!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F610839df-0ec0-422c-af3c-b6a207366ae3_256x256.png</url><title>AIPack Substack</title><link>https://news.aipack.ai</link></image><generator>Substack</generator><lastBuildDate>Tue, 21 Apr 2026 10:31:19 GMT</lastBuildDate><atom:link href="https://news.aipack.ai/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Jeremy Chone]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[aipackai@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[aipackai@substack.com]]></itunes:email><itunes:name><![CDATA[Jeremy Chone]]></itunes:name></itunes:owner><itunes:author><![CDATA[Jeremy Chone]]></itunes:author><googleplay:owner><![CDATA[aipackai@substack.com]]></googleplay:owner><googleplay:email><![CDATA[aipackai@substack.com]]></googleplay:email><googleplay:author><![CDATA[Jeremy Chone]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[pro@coder v0.5.2 Demo - Workbench]]></title><description><![CDATA[Just released v0.5.2 with new Workbench support]]></description><link>https://news.aipack.ai/p/procoder-v052-demo-workbench</link><guid isPermaLink="false">https://news.aipack.ai/p/procoder-v052-demo-workbench</guid><dc:creator><![CDATA[Jeremy Chone]]></dc:creator><pubDate>Sat, 11 Apr 2026 15:01:58 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/193859162/7c9197878cff851833d40dc8f6115b40.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>(See on YouTube as <a href="https://youtu.be/N41FKZ6XA9s">unlisted video</a>)</p><p>Here is a video introducing the latest <code>pro@coder</code> v0.5.2 release with the new <strong>Workbench</strong> concept.</p><p>Also, more features were released, see the release notes below.</p><h4><code>pro@coder</code> - <strong>v0.5.2</strong> - Release notes</h4><p>Install/Upgrade: <code>aip install pro@coder</code></p><p>Rename your <code>coder-prompt.md</code> to <code>coder-prompt-old.md</code></p><p>Major change: doubling down on <strong>workbench</strong> rather than <strong>dev</strong>, and support for <strong>spec</strong></p><pre><code><code># Default dir: .aipack/.prompt/pro@coder/workbench-default 
workbench:
  # dir:  .aipack/.prompt/pro@coder/workbench-default
  # chat: true
  # spec: true
  # plan: true
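  # e.g., point the workbench at a feature folder instead of the default:
  # dir: _workbench/cool-feature/
  # chat.md, spec.md, and plan.md are then created there for each entry set to true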
</code></code></pre><p>New/Changed:</p><ul><li><p>Now a full <code>workbench:</code> mode, replacing the old <code>dev:</code></p></li><li><p>By default, it will create <code>.aipack/.prompt/pro@coder/workbench-default/</code></p></li><li><p>But you can set any <code>dir: _workbench/cool-feature/</code>, and the <code>chat.md</code>, <code>spec.md</code>, and <code>plan.md</code> files will be created in this folder for each entry set to <code>true</code></p></li><li><p>Now <code>chat.md</code> instead of <code>dev-chat.md</code></p></li><li><p>Now a single <code>plan.md</code> instead of <code>plan-1</code>, etc. Still has the <code>_plan-rules.md</code></p></li><li><p>Now supports spec, with <code>spec.md</code> and <code>_spec-rules.md</code></p></li><li><p>The sub-agent can now run on the coder end event with <code>on: ["start", "end"]</code></p></li><li><p><code>structure_globs:</code> is no longer generated by default in the <code>coder-prompt.md</code>, but it is still supported</p></li></ul><p>See the readme for more info: <a href="https://github.com/aipack-ai/packs-pro/blob/main/pro/coder/README.md">https://github.com/aipack-ai/packs-pro/blob/main/pro/coder/README.md</a></p><p>The readme is also available under <code>.aipack/.prompt/pro@coder/README.md</code>, and you can activate it in <code>knowledge_globs:</code></p>]]></content:encoded></item><item><title><![CDATA[GPT 5.4 Mini vs. Flash 3: Which model actually picks the right coding context?]]></title><description><![CDATA[A real production benchmark on auto-context selection, comparing speed, cost, and file-picking quality before the main coding step, between GPT 5.4 Mini and Google Gemini Flash 3]]></description><link>https://news.aipack.ai/p/gpt-54-mini-vs-flash-3-which-model</link><guid isPermaLink="false">https://news.aipack.ai/p/gpt-54-mini-vs-flash-3-which-model</guid><dc:creator><![CDATA[Jeremy Chone]]></dc:creator><pubDate>Sat, 21 Mar 2026 16:30:51 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!0vkJ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F33761160-8041-4027-a9b6-855f0bd4719d_1419x768.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<blockquote><p>TL;DR: Flash 3 picks better coding context than GPT-5.4 Mini on medium tasks; Mini-High narrows the gap.</p></blockquote><p>In our production coding workflow, we use the LISP model:</p><p>Lens &#10140; Index &#10140; Select &#10140; Perform</p><ul><li><p>Lens (globs) &#10140; Human defines what the model can see</p></li><li><p>Index (code-map) &#10140; AI maps the lensed material, for each file: summary, when to use, public types, public functions</p></li><li><p>Select (auto-context) &#10140; AI picks the relevant files for the prompt</p></li><li><p>Perform (main work) &#10140; AI executes with precise context</p></li></ul><p>The auto-context stage takes the user request plus project-level summaries and decides which files the final coding model should receive. That decision has a direct impact on output quality. If the wrong files are selected, the downstream coding step starts with the wrong context, or with too much context, even if the execution model itself is strong.</p><blockquote><p>Note: AIPack v0.8.20 has been released with the new GPT 5.4 mini/nano aliases and updated pricing info. 
Release note: https://substack.com/@jeremychone/note/c-230413138</p></blockquote><h2>The test</h2><p>We ran a real auto-context selection task for production coding.</p><p>The model gets:</p><ul><li><p>A code map file for each source path, with summary, when_to_use, public_types, and public_functions</p></li><li><p>The user prompt</p></li><li><p>An instruction to select the appropriate paths for that prompt</p></li></ul><p>In this case, the available file set is about 70 files and 8k LOC, so it is relatively small.</p><p>This is not an extreme retrieval problem. It is a medium-complexity selection task, the kind of step many coding agents and developer workflows need to get right consistently.</p><h2>Test setup</h2><p>We ran the same auto-context task across several model settings:</p><ul><li><p><code>mini</code></p></li><li><p><code>mini-medium</code></p></li><li><p><code>mini-high</code></p></li><li><p><code>flash</code></p></li></ul><p>Each run used the same prompt, the same code map, and the same context files.</p><p>The goal was simple: measure the tradeoff between speed, cost, and selection quality for a real auto-context step in a production coding workflow.</p><p>The config with <code>pro@coder</code> is as follows:</p><pre><code><code>context_globs:
  - "*.*"
  - src/**/*.*
  - doc/**/*.*
  - dev/spec/*.*
  - tests/**/*.*
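  # the "!" prefix below excludes matches (keeps generated test output out of the context)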
  - "!tests/.out/**/*.*" 
  
auto_context: 
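  # model swapped across mini | mini-medium | mini-high | flash for each benchmark run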
  model: mini-medium     
  input_concurrency: 32

dev:
  chat: true 
  plan: false

model: mini
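# main model for the coding step; auto_context.model above only selects which files it receives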
</code></code></pre><p>And the prompt is:</p><pre><code><code>Do not write any code file, just read and update dev chat. 

tell me what is missing on the new doc for llm. We added quite a bit the last few weeks.
</code></code></pre><p>We changed <code>auto_context.model</code> across <code>mini</code>, <code>mini-medium</code>, <code>mini-high</code>, and <code>flash</code>, cleaning the dev chat for each iteration.</p><h2>Results</h2><p>Here are the results for the task, from worst to best:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!0vkJ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F33761160-8041-4027-a9b6-855f0bd4719d_1419x768.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!0vkJ!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F33761160-8041-4027-a9b6-855f0bd4719d_1419x768.png 424w, https://substackcdn.com/image/fetch/$s_!0vkJ!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F33761160-8041-4027-a9b6-855f0bd4719d_1419x768.png 848w, https://substackcdn.com/image/fetch/$s_!0vkJ!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F33761160-8041-4027-a9b6-855f0bd4719d_1419x768.png 1272w, https://substackcdn.com/image/fetch/$s_!0vkJ!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F33761160-8041-4027-a9b6-855f0bd4719d_1419x768.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!0vkJ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F33761160-8041-4027-a9b6-855f0bd4719d_1419x768.png" width="1419" height="768" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/33761160-8041-4027-a9b6-855f0bd4719d_1419x768.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:768,&quot;width&quot;:1419,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:208153,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://news.aipack.ai/i/191628643?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F33761160-8041-4027-a9b6-855f0bd4719d_1419x768.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!0vkJ!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F33761160-8041-4027-a9b6-855f0bd4719d_1419x768.png 424w, https://substackcdn.com/image/fetch/$s_!0vkJ!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F33761160-8041-4027-a9b6-855f0bd4719d_1419x768.png 848w, https://substackcdn.com/image/fetch/$s_!0vkJ!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F33761160-8041-4027-a9b6-855f0bd4719d_1419x768.png 1272w, 
https://substackcdn.com/image/fetch/$s_!0vkJ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F33761160-8041-4027-a9b6-855f0bd4719d_1419x768.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><blockquote><p>Note: This is based on a few runs for this particular use case; even so, medium-complexity tasks like this can provide a useful relative benchmark.</p></blockquote><h2>What the test shows</h2><p>For this specific auto-context task:</p><ul><li><p><code>flash</code> produced the best overall result</p></li><li><p><code>mini</code> was extremely fast, but the quality was too low for this task</p></li><li><p><code>mini-medium</code> improved on <code>mini</code>, but was still not reliable enough</p></li><li><p><code>mini-high</code> got much closer on quality, but with a significant latency cost</p></li></ul><p>The quality gap shows up quickly on this kind of medium-complexity context selection task.</p><h2>Practical takeaway</h2><p>If your workflow depends on selecting the right files before the main coding step, this is the kind of benchmark worth tracking.</p><p>Auto-context is one of those places where a small quality gap can create a much larger downstream problem. 
A model that is fast but selects the wrong files does not really save time.</p><p>In this test, <code>flash</code> gave the best balance for the task.</p><p>We will keep running more variations on the same benchmark and share the results.</p>]]></content:encoded></item><item><title><![CDATA[pro@coder - Static Image ➜ Interactive Chart (with Opus 4.5)]]></title><description><![CDATA[With the NEW AIPack and pro@coder ATTACHMENT support, we can now include images in our prompt by just copying and pasting them into our prompt and asking AI to code & tune User Interfaces.]]></description><link>https://news.aipack.ai/p/procoder-static-image-interactive-with-opus-4-5</link><guid isPermaLink="false">https://news.aipack.ai/p/procoder-static-image-interactive-with-opus-4-5</guid><dc:creator><![CDATA[Jeremy Chone]]></dc:creator><pubDate>Sun, 30 Nov 2025 16:02:11 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/180296363/54b76df41a21ce4a926b533d3797f3eb.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>In this video, we are going to learn how to use the new attachment support of <code>pro@coder</code> and turn a static image into an interactive chart (using Anthropic Opus 4.5, which is awesome). </p><p>Video also available on <a href="https://youtu.be/k6SNjzIcd8g">YouTube</a> (better experience on Mobile) </p><p>See below for the asset and prompts.</p><p>Image reference: <a href="https://www.vecteezy.com/vector-art/4063019-nice-graph-design">https://www.vecteezy.com/vector-art/4063019-nice-graph-design</a><br>(from <a href="https://www.vecteezy.com/members/grmarcstock">Gerardo Giuseppe Ramos Granada</a>)</p><p>Key Points for <strong><a href="https://www.linkedin.com/search/results/all/?keywords=%23productioncoding&amp;origin=HASH_TAG_FROM_FEED">#ProductionCoding</a></strong> with AI<br><br>&#9989; Workflows matter more than tools and models. <br><br>&#9989; File-based for scale (e.g., do not code in a chat box)<br><br>&#9989; Use tools that allow you to OWN your context. <br><br>The last point is the most important, <br><br>&#128073; For maximum AI coding quality, use CONTEXT LENSING. <br><br>Context lensing is about having tight control of what we want the AI to see and not see.</p><h2><strong>Step 1 - Initial</strong></h2><p>In the <code>pro@coder</code> prompt, paste the image and use this prompt.</p><pre><code><code>In the web-content/ folder, implement index.html to display the chart shown in the image. 
- Use canvas2d
- Follow the same dot positions as the image
- Chart should take up 2/3 of the page
- Resize with the page and be centered.

![chart](image.png)
</code></code></pre><h2><strong>Step 2 - Add toolbar and toggle</strong></h2><p></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!JQxJ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbb5e7bb4-9676-40cf-8895-bd16264e7be1_710x537.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!JQxJ!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbb5e7bb4-9676-40cf-8895-bd16264e7be1_710x537.png 424w, https://substackcdn.com/image/fetch/$s_!JQxJ!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbb5e7bb4-9676-40cf-8895-bd16264e7be1_710x537.png 848w, https://substackcdn.com/image/fetch/$s_!JQxJ!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbb5e7bb4-9676-40cf-8895-bd16264e7be1_710x537.png 1272w, https://substackcdn.com/image/fetch/$s_!JQxJ!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbb5e7bb4-9676-40cf-8895-bd16264e7be1_710x537.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!JQxJ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbb5e7bb4-9676-40cf-8895-bd16264e7be1_710x537.png" width="560" height="423.5492957746479" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/bb5e7bb4-9676-40cf-8895-bd16264e7be1_710x537.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:537,&quot;width&quot;:710,&quot;resizeWidth&quot;:560,&quot;bytes&quot;:45366,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://news.aipack.ai/i/180296363?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbb5e7bb4-9676-40cf-8895-bd16264e7be1_710x537.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!JQxJ!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbb5e7bb4-9676-40cf-8895-bd16264e7be1_710x537.png 424w, https://substackcdn.com/image/fetch/$s_!JQxJ!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbb5e7bb4-9676-40cf-8895-bd16264e7be1_710x537.png 848w, https://substackcdn.com/image/fetch/$s_!JQxJ!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbb5e7bb4-9676-40cf-8895-bd16264e7be1_710x537.png 1272w, https://substackcdn.com/image/fetch/$s_!JQxJ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbb5e7bb4-9676-40cf-8895-bd16264e7be1_710x537.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 
pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p></p><pre><code><code>Implement the toggle toolbar as shown in the image, and implement the bar view.

![chart-with-toggle](image-1.png)
</code></code></pre><h2><strong>Step 3 - Fix the UI</strong></h2><pre><code><code>Position the toolbar so that it overlaps with the top line of the chart and make it larger, as shown in the `chart-with-toggle-reference` image. 
Look at the `actual-chart-implemented` image to see how it currently looks.

![actual-chart-implemented](image-2.png)

![chart-with-toggle-reference](image-1.png)
</code></code></pre><h2><strong>Step 4 - Number cheating</strong></h2><pre><code><code>Now, when in line mode, allow the user to drag the dot vertically and keep the lines connected to it.</code></code></pre><p>#VibeCoding to #AIProductionCoding</p>]]></content:encoded></item><item><title><![CDATA[From Vibe Coding to Pro Coding: A Plan-Based pro@coder Demo]]></title><description><![CDATA[A simple example showing how to use best practices to upgrade a Vibe Coding example to a professional coding workflow with the pro@coder AI Pack.]]></description><link>https://news.aipack.ai/p/vibe-coding-to-pro-coder-plan-based</link><guid isPermaLink="false">https://news.aipack.ai/p/vibe-coding-to-pro-coder-plan-based</guid><dc:creator><![CDATA[Jeremy Chone]]></dc:creator><pubDate>Tue, 28 Oct 2025 14:01:29 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/177334645/58512f543f7fd144c7d66deb8af66dc6.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p><em>Click <a href="https://www.youtube.com/watch?v=xgOxXCuMtYE">here</a> for the <a href="https://www.youtube.com/watch?v=xgOxXCuMtYE">YouTube version</a> (better on mobile)</em><br><br>Here is a <code>pro@coder</code> demo showing how to apply production coding best practices using a plan-based development approach.</p><p>Prerequisites</p><ul><li><p>Install AIPack (to get the <code>aip</code> command line): See <a href="https://aipack.ai">https://aipack.ai</a></p></li><li><p>Recommended: Create a new folder and initialize a Git repository with a <code>.gitignore</code> file that ignores <code>.aipack/</code> and other build files (best practices)</p></li><li><p>Recommended: Open this folder in your IDE (e.g., VSCode, Cursor, etc.) and open an integrated terminal</p></li><li><p>Install <code>pro@coder</code> with <code>aip install pro@coder</code></p></li></ul><p>Prompt used in this video:</p><h2><strong>First Prompt - Create initial index.html</strong></h2><pre><code><code>First, implement the following. No plan is needed for this task.

Under the `web-content/` folder, implement a single `index.html` file featuring a particle triangle. This triangle should explode on the first click and then reassemble on the second click.

- Use Canvas2D.
- The canvas should resize with the screen, using a 16px margin.
- Triangle:
    - The triangle should be centered and point upward.
    - The triangle must occupy about two-thirds of the canvas area.
    - Triangle Particles must be 8px by 8px, spaced out by 4px.
    - Use a dark background and blue particles.
- Explosion:
    - The explosion should use `sin easeOutQuint` easing, originate from the center, move outward, stop 80% of the way, and hold that state until the next click triggers reassembly.
    - The explosion should look organic.
- Reassembly: 
    - The reassembly animation should use `easeOutElastic` easing.
    - When reassembled, it should return to the initial state.
- Other requirements: 
    - Ensure that clicks can be registered even during the animation.


</code></code></pre><h2><strong>Second Prompt - Ask to create the prompt</strong></h2><pre><code><code>Following the plan rules, create a plan to refactor this index.html into a proper architecture

- js files under web-content/js/...
- main.js entrypoint, type module 
- cv-utils.js for canvas utilities
- cv-animation.js for animations
- easing.js for easing
- And any other well-designed js files.
- Also config.js for holding the config values. 
- css/main.css
- Start with the main js/css refactor.
</code></code></pre><h2><strong>Conclusion</strong></h2><p>This was a very simple example of how to apply some Pro AI Coding practices even when the start was vibe coded.</p><p><a href="https://www.linkedin.com/posts/jeremychone_production-vibecoding-aiproduction-share-7390494538693525504-H4As">Here</a> are some more tips about <a href="https://www.linkedin.com/posts/jeremychone_production-vibecoding-aiproduction-share-7390494538693525504-H4As">Vibe Coding to AI Pro Coding</a></p><p>Until next time, happy coding!</p>]]></content:encoded></item><item><title><![CDATA[AI Knowledge Optimization - AKO AIPack Demo!]]></title><description><![CDATA[Turn 500MB of HTML Noise into 0.5MB of AI-Optimized Content in 5 Minutes with AIPack `lab@ako`]]></description><link>https://news.aipack.ai/p/aipack-ai-knowledge-optimization</link><guid isPermaLink="false">https://news.aipack.ai/p/aipack-ai-knowledge-optimization</guid><dc:creator><![CDATA[Jeremy Chone]]></dc:creator><pubDate>Tue, 21 Oct 2025 14:31:13 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/176699856/f25191bb40f788a5f684e2a6d4859692.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>In this video, we&#8217;ll demonstrate the power of <strong>Agentic Knowledge Optimization</strong> with AI Pack. We&#8217;re tackling a key challenge enterprises face today: transforming large volumes of noisy, website knowledge, like API documentation, into highly optimized, LLM-ready content.</p><p>Our use case is both simple and impactful: converting approximately <strong>500MB of raw HTML documentation</strong> into just <strong>0.5MB of refined, AI-optimized content</strong> using the AIPack <code>lab@ako</code>.</p><p>Runs in a <strong>Terminal User Interface</strong>, compatible with <strong>Windows, Mac, and Linux</strong>, on desktop or cloud.</p><p>Your <strong>API Key</strong>, <strong>Your Way</strong>, <strong>Open source</strong>, <strong>High power</strong>, <strong>Zero fluff.</strong></p><h3><strong>The Demo Walkthrough</strong></h3><p>We&#8217;ll demonstrate how to easily install and set up the <code>lab@ako</code> AI Pack. For this demo, we&#8217;re using public documentation from <code>alchemy.com</code>.</p><p><strong>Here&#8217;s the demo plan:</strong></p><ul><li><p>Install <code>lab@ako</code>, the Agentic Knowledge Optimization AI Pack deployed on <strong><a href="http://aipack.ai">aipack.ai</a></strong>.</p></li><li><p>Run <code>lab@ako</code> on a small set of files to understand how the agent processes data and validates outputs.</p></li><li><p>Scale the operation by downloading 500MB of raw HTML and increasing <strong>concurrency</strong> to <strong>32 parallel tasks</strong>.</p></li><li><p>Watch as <strong>30 minutes</strong> of sequential processing is condensed into just <strong>5 minutes</strong> of high-concurrency performance.</p></li></ul><h3><strong>The Agentic Workflow</strong></h3><p>The <code>lab@ako</code> agent follows a structured, multi-sub-agent pipeline designed for deep optimization:</p><ol><li><p><strong>Fetch &amp; Slim:</strong> The process begins by fetching raw documentation files. The agent then &#8220;slims&#8221; them by stripping away noisy HTML and converting the content into clean Markdown. 
Each resulting Markdown file includes a concise, AI-generated summary.</p></li><li><p><strong>Clean &amp; Augment:</strong> Each Markdown file is refined further&#8212;structural issues are fixed, and the content is enhanced to ensure clarity, coherence, and contextual richness for LLM reasoning.</p></li><li><p><strong>Final Indexing:</strong> Using the summaries from step one, the AI compiles a comprehensive index file, <code>llm.md</code>. This file serves as a <strong>single source of truth</strong> for LLMs, enabling fast and precise knowledge retrieval.</p></li></ol><p>In summary, <code>lab@ako</code> transforms <strong>500MB of noisy HTML</strong> into <strong>0.5MB of clean, structured, and AI-optimized Markdown</strong>, complete with a specialized <code>llm.md</code> index file&#8212;ready to power AI systems for accurate and efficient knowledge reasoning.</p><h3><strong>What&#8217;s Next?</strong></h3><p>Building on the success of <code>lab@ako</code>, we&#8217;re preparing to launch <code>lab@kchat</code>, our upcoming Knowledge Chat solution. This next AI Pack will allow users to deploy the <code>kchat</code> agent and begin <strong>interacting directly with their optimized content</strong>&#8212;turning static documentation into an intelligent, conversational knowledge experience.</p>]]></content:encoded></item><item><title><![CDATA[AIPACK 0.8 Released - First Demo ]]></title><description><![CDATA[(demo@proof reader pack)]]></description><link>https://news.aipack.ai/p/aipack-08-released-first-demo</link><guid isPermaLink="false">https://news.aipack.ai/p/aipack-08-released-first-demo</guid><dc:creator><![CDATA[Jeremy Chone]]></dc:creator><pubDate>Mon, 01 Sep 2025 14:09:33 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/172445559/454cc8143ef1012718d77369cb6c9f4e.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>AIPACK 0.8 has been released. 
Here is the first demo video.</p><p>We'll demonstrate how to use the built-in <code>demo@proof</code> (proofreader) AI Pack, which proofreads a prompt or one or more files.</p><p>- 00:00 - Intro<br>- 00:11 - Setup<br>- 00:54 - First run of demo@proof with parametric prompt<br>- 02:14 - Craft Custom Text Example<br>- 03:23 - Model Reasoning Suffix<br>- 03:57 - Proof English<br>- 04:30 - Proof Multiple Files<br>- 06:05 - Input Concurrency to 3<br>- 06:53 - AIPACK Base Config<br>- 08:15 - What Next</p><p>You can install AIPACK from https://aipack.ai</p><p>And see many videos and tutorials at https://news.aipack.ai/archive </p><p>(<a href="https://youtu.be/h2FHbteWjvU">This video on YouTube</a>)</p>]]></content:encoded></item><item><title><![CDATA[Live Demo • Agentic Knowledge Optimization • Wed, 28 May • 11am PT]]></title><description><![CDATA[Shrink 150 MB of noisy HTML site to 0.9 MB of AI-Optimized Markdown]]></description><link>https://news.aipack.ai/p/live-demo-agentic-knowledge-optimization</link><guid isPermaLink="false">https://news.aipack.ai/p/live-demo-agentic-knowledge-optimization</guid><dc:creator><![CDATA[Jeremy Chone]]></dc:creator><pubDate>Fri, 23 May 2025 14:30:49 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/bccce201-3600-4004-b71c-80c4ac12cc24_1920x1080.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://events.teams.microsoft.com/event/ebc4bb9c-7198-435a-b4e4-1e2db9c16600@cee695b0-8c30-4889-9369-67b2d6fb3b6b" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Gqnh!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F71cc4087-21b0-4296-9d17-4d55e6a705bc_800x200.png 424w, https://substackcdn.com/image/fetch/$s_!Gqnh!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F71cc4087-21b0-4296-9d17-4d55e6a705bc_800x200.png 848w, https://substackcdn.com/image/fetch/$s_!Gqnh!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F71cc4087-21b0-4296-9d17-4d55e6a705bc_800x200.png 1272w, https://substackcdn.com/image/fetch/$s_!Gqnh!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F71cc4087-21b0-4296-9d17-4d55e6a705bc_800x200.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Gqnh!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F71cc4087-21b0-4296-9d17-4d55e6a705bc_800x200.png" width="800" height="200" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/71cc4087-21b0-4296-9d17-4d55e6a705bc_800x200.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:200,&quot;width&quot;:800,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:115821,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:&quot;https://events.teams.microsoft.com/event/ebc4bb9c-7198-435a-b4e4-1e2db9c16600@cee695b0-8c30-4889-9369-67b2d6fb3b6b&quot;,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://news.aipack.ai/i/164209325?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F71cc4087-21b0-4296-9d17-4d55e6a705bc_800x200.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Gqnh!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F71cc4087-21b0-4296-9d17-4d55e6a705bc_800x200.png 424w, https://substackcdn.com/image/fetch/$s_!Gqnh!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F71cc4087-21b0-4296-9d17-4d55e6a705bc_800x200.png 848w, https://substackcdn.com/image/fetch/$s_!Gqnh!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F71cc4087-21b0-4296-9d17-4d55e6a705bc_800x200.png 1272w, https://substackcdn.com/image/fetch/$s_!Gqnh!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F71cc4087-21b0-4296-9d17-4d55e6a705bc_800x200.png 1456w" sizes="100vw" fetchpriority="high"></picture><div></div></div></a></figure></div><p><a href="https://events.teams.microsoft.com/event/ebc4bb9c-7198-435a-b4e4-1e2db9c16600@cee695b0-8c30-4889-9369-67b2d6fb3b6b">&#128073; Live Demo Registration</a></p><p>I&#8217;m hosting a hands-on webinar on <strong><a href="https://events.teams.microsoft.com/event/ebc4bb9c-7198-435a-b4e4-1e2db9c16600@cee695b0-8c30-4889-9369-67b2d6fb3b6b">Wed, 28 May &#8226; 11 AM PT</a></strong> where we&#8217;ll:</p><ul><li><p><strong>150 MB</strong> of raw HTML &#8594; <strong>0.9 MB</strong> of <strong>AI-Optimized</strong> Markdown</p></li><li><p><strong>Strip out</strong> menus, ads, and noise (<strong>~</strong> 3.5 MB Slimmed HTML, ~ 1.2 MB Raw Markdown)</p></li><li><p><strong>Agentic Clean &amp; Augment &#11157;</strong> <strong>0.9 MB</strong> of clean</p></li><li><p>Result <strong>&#11157;</strong> <strong>Highest content-to-noise</strong> ratio content</p></li><li><p><strong>Do it all</strong> with simple AIPACK agents in under <strong>few minutes</strong></p></li></ul><p>See you on <strong><a href="https://events.teams.microsoft.com/event/ebc4bb9c-7198-435a-b4e4-1e2db9c16600@cee695b0-8c30-4889-9369-67b2d6fb3b6b">Wed, 28 May &#8226; 11 AM PT</a></strong> </p><p><a href="https://events.teams.microsoft.com/event/ebc4bb9c-7198-435a-b4e4-1e2db9c16600@cee695b0-8c30-4889-9369-67b2d6fb3b6b">&#128073; Live Demo Registration</a></p><blockquote><p>Can&#8217;t make it live? 
Drop a comment below with what you&#8217;d most like to see, and we will send you the replay!</p></blockquote>]]></content:encoded></item><item><title><![CDATA[AIPACK Tutorial – From Hello World to HTML AI Optimizer Agent ]]></title><description><![CDATA[Create your first my-agent.aip AI PACK agent from multi-input Hello World to an HTML AI Optimizer]]></description><link>https://news.aipack.ai/p/aipack-tutorial-from-hello-world</link><guid isPermaLink="false">https://news.aipack.ai/p/aipack-tutorial-from-hello-world</guid><dc:creator><![CDATA[Jeremy Chone]]></dc:creator><pubDate>Mon, 12 May 2025 14:03:05 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/163369693/ec9cc328cccb24e35e14c5e178799d1e.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>In this tutorial, we are going to create our first AIPACK agent, <code>my-agent.aip</code>.</p><p>First, we&#8217;ll do a quick walkthrough of the prerequisites:</p><ul><li><p>Install AIPACK (the <code>aip</code> command line) from <a href="https://aipack.ai">https://aipack.ai</a></p></li><li><p>Start VSCode in a folder and set the desired AI Providers' API Keys</p></li><li><p>(optional) Install the optional <a href="https://marketplace.visualstudio.com/items?itemName=aipack.aipack">VSCode AIPACK</a> extension.</p></li></ul><p>Here is the <a href="https://github.com/aipack-ai/aipack-lab/tree/main/hello-world">AIPACK-Lab folder for this project</a>.</p><p>Then, we will do the hello world example, which has two main sections.</p><h2><strong>1) Hello World</strong></h2><p>In the hello world section of the tutorial, we will learn how to:</p><ul><li><p>Create a simple <code>my-agent.aip</code> with hardcoded input (in the data <code># Data</code>) and use debug print in the output (<code># Output</code>) (without AI - good for debugging).</p></li><li><p>Activate the <code># Instruction</code> section to send the request to AI.</p></li><li><p>Use multiple inputs from the command line with <code>aip run -i ... 
-i ...</code>.</p></li><li><p>Change the model and use the <code>-zero</code> Gemini 2.5 suffix to control the thinking budget.</p></li><li><p>Append (<code>aip.file.append</code>) content to a single file for all concurrent inputs.</p></li></ul><h2><strong>2) HTML Optimizer</strong></h2><p>Then, we will update the <code>my-agent.aip</code> tutorial to learn how to AI-optimize an HTML code documentation page using the AIPACK slim-HTML and markdown APIs, forwarding the slimmed markdown content to AI to get a small but semantically rich markdown file.</p><p>We will use the following AIPACK APIs:</p><ul><li><p><code>aip.web.get</code> to get the original HTML content.</p></li><li><p><code>aip.file.save_html_to_slim</code> to slim the original HTML from <strong>1.5MB</strong> to <strong>57KB</strong>.</p></li><li><p><code>aip.file.save_html_to_md</code> to save the slimmed HTML as markdown (<strong>10KB</strong>).</p></li><li><p>Craft a custom instruction to have <code>gemini-2.5-flash</code> structure and reconstruct the code blocks.</p></li><li><p><code>aip.md.outer_md_block_or_raw</code> to remove any encompassing markdown code block.</p></li></ul><p>Check out all of the <a href="https://aipack.ai/doc/lua-apis">AIPACK Lua APIs Documentation</a> for more information.</p><p>Feel free to subscribe to <a href="https://news.aipack.com/">AIPACK Substack at news.aipack.com</a> for more info and tutorials.</p>]]></content:encoded></item><item><title><![CDATA[Production Coding Example with pro@coder AIPACK]]></title><description><![CDATA[Here is an example of how I do production coding with AI. In this video, I add a real feature to a real codebase.]]></description><link>https://news.aipack.ai/p/production-coding-example-with-procoder</link><guid isPermaLink="false">https://news.aipack.ai/p/production-coding-example-with-procoder</guid><dc:creator><![CDATA[Jeremy Chone]]></dc:creator><pubDate>Fri, 18 Apr 2025 14:31:07 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/161556467/2aab77d91661587693d20230bdc2745a.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>I use the pro@coder AI Pack, which in turn uses the pro@rust10x AI Pack that contains my Rust best practices.</p><p>In this example, I add a feature to AIPACK itself, which has about 20k LOC, and I use the new OpenAI GPT-4.1, but I can switch to any model from the prompt file on a per-request basis.</p><p>It took me a while to fully embrace AI for PRODUCTION CODING, as the overwhelming Vibe Coding noise was distracting and frustrating. But once we use AI to code our way, the force multiplier is really exhilarating.</p><p>Embrace AI for PRODUCTION CODING today, your way. 
</p><p>If you want to try AIPACK for production coding: </p><p>- You can install AIPACK: <a href="https://aipack.ai">https://aipack.ai</a></p><p>- See a demo with pro@coder: <a href="https://news.aipack.ai/p/procoder-ai-pack-demo-generate-doc">https://news.aipack.ai/p/procoder-ai-pack-demo-generate-doc</a></p>]]></content:encoded></item><item><title><![CDATA[New OpenAI Models - Price Comparison]]></title><description><![CDATA[Here is a quick video comparing the prices of the new OpenAI Models]]></description><link>https://news.aipack.ai/p/new-openai-models-price-comparison</link><guid isPermaLink="false">https://news.aipack.ai/p/new-openai-models-price-comparison</guid><dc:creator><![CDATA[Jeremy Chone]]></dc:creator><pubDate>Thu, 17 Apr 2025 16:01:41 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/161510410/a108f52d4b0cc5470f090938ae62bb24.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>Two new models from OpenAI: o4-mini and o3. Here&#8217;s how their prices stack up.<br><br>Coding comparison video coming... quality + price per commit comparison.<br><br>Also, worth noting: GPT-4.1-mini and GPT-4.1-nano are very cheap and fast. That matters a lot for simple coding or documentation tasks.<br><br>Check out this interactive chart on AIPACK &#8211; <strong><a href="https://lnkd.in/e8SYgxMn">https://lnkd.in/e8SYgxMn</a></strong><br><br>This interactive chart was built with:</p><ul><li><p><code>pro@coder</code> AI Pack for the code</p></li><li><p>Custom dev/pricing AIPACK Agents that build the JSON from providers' pricing pages</p></li></ul><p>Note that dev/pricing also uses the same data to generate Rust code, enabling the <code>aip</code> binary to compute prices for all models.</p>]]></content:encoded></item><item><title><![CDATA[Starting a Rust Project with Rust10x best practices]]></title><description><![CDATA[This quick demo demonstrates the power of AIPACK Agent Composability by combining the pro@coder and pro@rust10x AI Packs to start a new Rust project with good production coding best practices.]]></description><link>https://news.aipack.ai/p/starting-a-rust-project-with-rust10x</link><guid isPermaLink="false">https://news.aipack.ai/p/starting-a-rust-project-with-rust10x</guid><dc:creator><![CDATA[Jeremy Chone]]></dc:creator><pubDate>Wed, 02 Apr 2025 14:31:03 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/160378618/7c4d6eed46905a296ff927a595b833e0.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>This video shows how to kickstart a simple Rust programming project with AI, following the best practices I've been sharing on this channel.</p><p>It works by using the power of the AIPACK runtime. I started putting my best practices into a pro@rust10x AI Pack that can be used with the pro@coder AI Pack:</p><pre><code><em># Install AIPACK
</em><code>cargo install aipack</code></code></pre><pre><code><em># Install the Pro coder &amp; rust10x packs
</em><code>aip install pro@coder
aip install pro@rust10x</code></code></pre><p>Then, in your project folder you do: </p><pre><code><code>aip run pro@coder</code></code></pre><p>This will create a <code>.aipack/.prompt/pro@coder/coder-prompt.md</code></p><p>Which will be your coder prompt. </p><p>Then, in the top toml code block you can replace it with this: </p><pre><code><code>```toml
#!meta - parametric agent block

knowledge_globs = ["pro@rust10x/guide/base/**/*.md"]
base_dir = ""

context_globs = ["Cargo.toml", "src/**/*.rs"] 

model_aliases = {pro = "claude-3-7-sonnet-latest", gpro = "gemini-2.5-pro-exp-03-25", high = "o3-mini-high", low = "o3-mini-low", cheap = "gpt-4o-mini", flash = "gemini-2.0-flash"}

write_mode = true

model = "flash"
```</code></code></pre><p>And then, you can follow the demo.</p><p>Obviously, you can have your own best practices. Everything is open source, even those pro packs. You can find the pro packs at:</p><p><a href="https://github.com/aipack-ai/packs-pro">https://github.com/aipack-ai/packs-pro</a></p><p>You can make your own pack with:</p><pre><code><code>aip pack path/to/agent-folder/</code></code></pre><p>More on this later. Comment and like if you want to see more of this type of content. <br><br>Until next time, happy coding!</p>]]></content:encoded></item><item><title><![CDATA[pro@coder AI Pack Demo - Generate Doc]]></title><description><![CDATA[AI Pack pro@coder: Summarizing a Codebase with Production-Grade AI Workflows]]></description><link>https://news.aipack.ai/p/procoder-ai-pack-demo-generate-doc</link><guid isPermaLink="false">https://news.aipack.ai/p/procoder-ai-pack-demo-generate-doc</guid><dc:creator><![CDATA[Jeremy Chone]]></dc:creator><pubDate>Mon, 31 Mar 2025 02:07:34 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/160225272/2c783130d798d06f9989c103c56afb60.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>In this demo, we showcase how to <strong>generate structured documentation</strong> for a codebase using the <strong>AI Pack ProCoder</strong>, following a <strong>production-first</strong> approach.</p><h3><strong>Step-by-Step Setup</strong></h3><p><strong>Install AI Pack CLI</strong><br>Make sure you have Rust installed.</p><pre><code><code># Install AIPACK (aip command line)
cargo install aipack

# Install pro@coder
aip install pro@coder
</code></code></pre><p>This will install the <strong>pro@coder</strong> aipack at <code>~/.aipack-base/pack/installed/pro/coder</code></p><h3><strong>&#128273; Check Your API Keys</strong></h3><p>Ensure your environment has the necessary keys:</p><pre><code><code>aip check-keys
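# verifies which AI-provider API keys are set in your environment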
</code></code></pre><h3><strong>&#128640;  Run pro@coder</strong></h3><pre><code><code>aip run pro@coder
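# launches the agent; by convention, the prompt file lives at .aipack/.prompt/pro@coder/coder-prompt.md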
</code></code></pre><p>This launches the AI agent with a customizable prompt system. By pro@coder convention, prompts live in:</p><pre><code><code>.aipack/.prompt/pro@coder/coder-prompt.md
</code></code></pre><h3><strong>&#128736;&#65039; Configure the Prompt</strong></h3><ul><li><p>The prompt supports <strong>parameters</strong> via a <code>toml</code> code block.</p></li><li><p>Set the model (e.g., <code>gemini-2.0-flash</code> for fast/free or <code>gemini-2.5-pro</code> for high quality).</p></li><li><p><strong>Write Mode</strong>:</p><ul><li><p>Set to <code>false</code> to avoid file generation.</p></li><li><p>Set to <code>true</code> to enable the agent to write output files.</p></li></ul></li></ul><h3><strong>&#128194; Step 1: Generate Per-File Summaries</strong></h3><ol><li><p><strong>Set Working Globs</strong> (for parallel file processing):</p></li></ol><pre><code><code>...
# context_globs = ...

working_globs = ["web/content/**/*"]
working_concurrency = true
input_concurrency = 6
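# write = true lets the agent save each generated summary under .doc/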
write = true
</code></code></pre><ol><li><p><strong>Prompt Example</strong>:</p></li></ol><pre><code><code>Can you summarize the working files? Save each file's summary in .doc/summary-&lt;normalized-path&gt;.md
</code></code></pre><ol><li><p>Run the prompt (e.g., using <code>R</code> inside the ProCoder interface).</p></li></ol><p>&#9989; <strong>Result</strong>: Summaries for each source file saved in <code>.doc/</code>.</p><h3><strong>&#128216; Step 2: Create a Consolidated Summary</strong></h3><ol><li><p><strong>Switch to Context Globs</strong>:</p></li></ol><pre><code><code>...

context_globs = [".doc/summary-*.md"]

# working_globs       = ["web-content/**/*.*"]
# working_concurrency = true
# input_concurrency   = 6
</code></code></pre><ol><li><p><strong>Prompt Example</strong>:</p></li></ol><pre><code><code>Can you summarize your context files and save the result in doc/summary.md
</code></code></pre><ol><li><p>Run again with <code>write = true</code>.</p></li></ol><p>&#9989; <strong>Result</strong>: A clean, full summary generated at <code>doc/summary.md</code>.</p><h3><strong>&#10024; Highlights</strong></h3><ul><li><p><strong>Tiny Pack</strong>: ~9KB binary (pro@coder)</p></li><li><p><strong>Offline &amp; Secure</strong>: Works with local keys, local models, or remote models.</p></li><li><p><strong>Highly concurrent</strong>: Up to 6 files processed at once.</p></li><li><p><strong>Any AI Models/Providers</strong>: Gemini, OpenAI, Anthropic, Ollama, Groq, xAI</p></li><li><p><strong>Production Practices</strong>: Modular, guided prompts and output control.</p></li></ul><h3><strong>&#129504; Pro Tips</strong></h3><ul><li><p>Use <strong>working globs</strong> for parallel processing.</p></li><li><p>Use <strong>context globs</strong> to give a holistic context (both Working and Context globs can be combined).</p></li><li><p>In your <code>.gitignore</code>, ignore the <code>.*</code> folders used for temporary output.</p></li></ul><p>That&#8217;s it! With <strong>AI Pack pro@coder</strong>, you can turn any codebase into clean, structured documentation in no time, and much more. </p><p>Until next time&#8212;<strong>happy coding!</strong></p>]]></content:encoded></item><item><title><![CDATA[Quick Demo - @coder AI Pack]]></title><description><![CDATA[Here is a quick AI coding demo using the AIPACK runtime and my JC Coder AI Pack for production coding.]]></description><link>https://news.aipack.ai/p/quick-demo-coder-ai-pack</link><guid isPermaLink="false">https://news.aipack.ai/p/quick-demo-coder-ai-pack</guid><dc:creator><![CDATA[Jeremy Chone]]></dc:creator><pubDate>Sun, 09 Mar 2025 00:52:16 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/158677139/2ef1f639f8c26bf181cdf7f0fbd8ef32.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>Here is a quick AI coding demo using the AIPACK runtime and my JC Coder AI Pack for production coding.</p><p>While this demo is very simple and might fall into the "snake game" AI coding category, it demonstrates the flexibility of the JC Coder pack&#8212;which I am using to code AIPACK itself and other production projects.</p><p>Video chapters:</p><ul><li><p>00:00 - AIPACK installation and initialization</p></li><li><p>00:10 - Install and run jc coder pack</p></li><li><p>00:37 - Creating the initial Pong game</p></li><li><p>02:34 - Refactor 1 &#8211; Separating JS and CSS into their own files</p></li><li><p>03:24 - Refactor 2 &#8211; Using the type module and splitting JS files</p></li><li><p>04:47 - Adding Particle explosion</p></li><li><p>06:07 - The final result</p></li></ul><p>Check out <a href="https://aipack.ai">aipack.ai</a> and subscribe to this Substack for more updates: </p><div class="embedded-publication-wrap" data-attrs="{&quot;id&quot;:4287218,&quot;name&quot;:&quot;AIPACK Substack&quot;,&quot;logo_url&quot;:&quot;https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F610839df-0ec0-422c-af3c-b6a207366ae3_256x256.png&quot;,&quot;base_url&quot;:&quot;https://news.aipack.ai&quot;,&quot;hero_text&quot;:&quot;News, insights, and applications of Agentic Runtimes and AIPACK.&quot;,&quot;author_name&quot;:&quot;Jeremy Chone&quot;,&quot;show_subscribe&quot;:true,&quot;logo_bg_color&quot;:null,&quot;language&quot;:&quot;en&quot;}" data-component-name="EmbeddedPublicationToDOMWithSubscribe"><div class="embedded-publication show-subscribe"><a 
class="embedded-publication-link-part" native="true" href="https://news.aipack.ai?utm_source=substack&amp;utm_campaign=publication_embed&amp;utm_medium=web"><img class="embedded-publication-logo" src="https://substackcdn.com/image/fetch/$s_!Pzgt!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F610839df-0ec0-422c-af3c-b6a207366ae3_256x256.png" width="56" height="56"><span class="embedded-publication-name">AIPACK Substack</span><div class="embedded-publication-hero-text">News, insights, and applications of Agentic Runtimes and AIPACK.</div><div class="embedded-publication-author-name">By Jeremy Chone</div></a><form class="embedded-publication-subscribe" method="GET" action="https://news.aipack.ai/subscribe?"><input type="hidden" name="source" value="publication-embed"><input type="hidden" name="autoSubmit" value="true"><input type="email" class="email-input" name="email" placeholder="Type your email..."><input type="submit" class="button primary" value="Subscribe"></form></div></div><p>Key Concepts for the <strong>jc@coder</strong> Pack:</p><ul><li><p><strong>Works with any IDE</strong>, as it is just a command line tool (Tip: map "cmd + r" to send 'r' to the terminal).</p></li><li><p><strong>File-based</strong> &#8211; the prompt is a single Markdown file, with the top section containing the instructions and the bottom containing the AI response. So, <em>no copy/paste code into the terminal</em> or get lost in chats.</p></li><li><p><strong>Fully parametric</strong> &#8211; thanks to AIPACK parametric agent support, JC Coder allows the prompt to customize the model, knowledge/source context files, and even concurrency and temperature.</p></li><li><p>AI response info &#8211; returns <strong>usage and cost details</strong> in the prompt file.</p></li><li><p>The JC Coder AIPACK is extremely small (currently under <strong>8 KB</strong> zipped) and implements all of the agent coding logic.</p></li><li><p>Once installed, the JC Coder Pack can be found at <code>~/.aipack-base/pack/installed/jc/coder</code>. For very advanced users, it can be copied into your <code>~/.aipack-base/pack/custom/my/coder</code> directory so that you can customize it and run it with <code>aip run my@coder</code>.</p></li></ul><p>Key Concepts/Benefits for AIPACK:</p><ul><li><p>AIPACK is built with Rust, ensuring <strong>efficiency and high concurrency</strong>.</p></li><li><p>It leverages the Rust genai crate, which supports <strong>all major AI providers</strong> and models.</p></li><li><p>AIPACK is an <strong>agentic runtime</strong> offering a <strong>multi-stage</strong>, Markdown-based model and Lua as the efficient embedded scripting language, providing full flexibility and simplicity.</p></li><li><p>AIPACK features an innovative <strong>parametric agent model</strong> that allows agents to extract deterministic parameters directly from the prompt&#8212;or even from the AI response.</p></li><li><p>AIPACK is designed for concurrency, allowing agent packs like JC Coder to parallelize agent tasks across multiple input files (demo video coming).</p></li><li><p>AIPACK can run any multi-stage agent Markdown file using a command like: <code>aip run path/my-agent-aip</code></p></li><li><p>It can also pack, install, and run AI Packs whenever users are ready to share their agents.</p></li><li><p>See intro video at </p></li></ul><p>Subscribe to this Substack for news and updates about AIPACK. 
It's just the beginning &#8211; things are about to pick up speed, thanks to AIPACK.</p><p>Until next time &#8211; happy running!</p>]]></content:encoded></item><item><title><![CDATA[AIPACK Introduction]]></title><description><![CDATA[Run, Build, and Share your AI Packs]]></description><link>https://news.aipack.ai/p/aipack-introduction</link><guid isPermaLink="false">https://news.aipack.ai/p/aipack-introduction</guid><dc:creator><![CDATA[Jeremy Chone]]></dc:creator><pubDate>Wed, 05 Mar 2025 14:03:26 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/158417457/08de4614ed6960b5620d1330c91587a1.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>AIPACK is an Open Source Agentic Runtime to run, build, and share AI Packs.</p><ul><li><p>It supports <strong>all</strong> major AI Providers and Models.</p></li><li><p>It's efficient and small, below 20 megabytes with zero dependencies.</p></li><li><p>It can be run locally, completely IDE Agnostic, or in the cloud, server, or serverless.</p></li></ul><p>So, what is an AI PACK?</p><p>An AIPack is a <strong>.aipack</strong> cross-platform file that can be installed, run, built, and shared.</p><p>An AIPack file contains:</p><ul><li><p><strong>Agents</strong>: One or More Agent Files, the <code>.aip</code> multi-stage markdown file.</p></li><li><p><strong>Logic</strong>: Any number of Lua files, which provide a very efficient and cross-platform way to add logic to our agent.</p></li><li><p><strong>Data</strong>: Any type of data from markdown, JSON, CSV, and even later SQLite data.</p></li></ul><p>An AI Pack agent is a single <strong>.aip</strong> file defined within a multi-stage markdown file.</p><p>There are three main stages:</p><ol><li><p>The first stage is the <strong>data stage</strong>, which contains the logic to fetch and prepare the data for the prompt.</p></li><li><p>The <strong>prompt </strong>templating <strong>stage</strong> gives complete control over the prompt layout using the data from the previous stage. This is what will get sent to the AI Model.</p></li><li><p>The AI Model response will be sent to the <strong>output stage</strong> for processing.</p></li></ol><p>The AIPACK Runtime is built to run an agent in parallel when multiple inputs are given, providing full parallelism with minimal complexity.</p><p>An agent can define map-reduce stages, including <strong>Before All</strong> and <strong>After All</strong> stages, enabling advanced concurrent processing and input reshaping. 
This provides unparalleled flexibility, concurrency, and simplicity.</p><p>You can find more information about AIPACK at <strong><a href="https://aipack.ai">aipack.ai</a></strong></p><p>Feel free to subscribe to this Substack for the latest news and updates.</p><p>It's just the beginning, but we already have quite a bit working, and new features are coming fast.</p><p>Until next time, happy running.</p>]]></content:encoded></item></channel></rss>