<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="3.10.0">Jekyll</generator><link href="https://kubogi.github.io/feed.xml" rel="self" type="application/atom+xml" /><link href="https://kubogi.github.io/" rel="alternate" type="text/html" /><updated>2026-05-08T04:50:36+00:00</updated><id>https://kubogi.github.io/feed.xml</id><title type="html">Hai’s Blog</title><subtitle>I do code sometimes</subtitle><author><name>Hai Hoang</name></author><entry><title type="html">Using Python to Win a JavaScript Contest (CS1101S: Game of Tones)</title><link href="https://kubogi.github.io/2026/01/06/cs1101s-sound.html" rel="alternate" type="text/html" title="Using Python to Win a JavaScript Contest (CS1101S: Game of Tones)" /><published>2026-01-06T00:00:00+00:00</published><updated>2026-01-06T00:00:00+00:00</updated><id>https://kubogi.github.io/2026/01/06/cs1101s-sound</id><content type="html" xml:base="https://kubogi.github.io/2026/01/06/cs1101s-sound.html"><![CDATA[<p><em>Solving a music problem with no musical intuition</em></p>

<h2 id="1-context">1. Context</h2>

<p>CS1101S is an introductory computer science course that teaches core programming concepts using Source, a JavaScript-like language. Along the way, the course runs a few optional creative contests that let students apply these ideas outside of standard problem sets.</p>

<p>Past contests include Beautiful Runes (visual art with Source runes) and The Choreographer (programmatically generated curves). I was especially drawn to Game of Tones, the sound contest, because it felt the most programmable: instead of hand-tuning visuals, it’s about generating music through code, which made it a perfect candidate for automation.</p>

<h2 id="2-finding-the-right-song">2. Finding the right song</h2>

<h3 id="constraints">Constraints</h3>

<p>Sounds simple enough, right? Not really. There were some surprisingly tight constraints I had to consider when choosing the song:</p>

<ul>
  <li>MIDI availability (non-negotiable)
    <ul>
      <li>This is an obvious requirement, and also the biggest bottleneck of all</li>
      <li>Automating music without a MIDI file is extremely difficult (I tried)</li>
    </ul>
  </li>
  <li>Complexity
    <ul>
      <li>Multiple layers, non-trivial melody</li>
      <li>Interesting enough that hand-coding would be tedious</li>
    </ul>
  </li>
  <li>Source-friendly
    <ul>
      <li>Simple instrumentation</li>
      <li>Minimal timbre, effects, or texture</li>
      <li>Sounds that can realistically be approximated from scratch in Source</li>
    </ul>
  </li>
</ul>

<p><em>(Source’s sound module is fairly minimal. There are only a few built-in waveforms, and there’s no large library of presets like you’d find in a DAW. Anything that relies on rich instruments, effects, or sound design becomes very hard to reproduce in Source.)</em></p>

<ul>
  <li>Performance constraints
    <ul>
      <li>Reasonable note count</li>
      <li>Should not lag or freeze the runtime</li>
    </ul>
  </li>
</ul>

<p><em>(For the contest, other students vote by actually running your code on their own machines. That means performance directly affects first impressions — long wait times would almost certainly hurt votes. For context, I’ve seen submissions with 2-3 minute wait times.)</em></p>

<h3 id="searching-for-songs">Searching for songs</h3>

<p>I tried several directions initially, but most failed:</p>
<ul>
  <li>Game soundtracks
    <ul>
      <li>Often Source-friendly in terms of complexity</li>
      <li>Public MIDIs are rare or nonexistent</li>
    </ul>
  </li>
  <li>Rhythm game songs
    <ul>
      <li>Fanmade MIDIs are often available (shoutout to <a href="https://www.youtube.com/@uaaaaak5622">@Uaaaaak</a>!)</li>
      <li>Extremely dense and layered, which makes it really difficult to replicate and optimize in Source</li>
      <li>Usually include heavy effects and rich instruments</li>
    </ul>
  </li>
</ul>

<audio controls="">
  <source src="/assets/audio/2026-01-06-cs1101s-sound/raputa.mp3" type="audio/mpeg" />
  Your browser does not support the audio element.
</audio>
<p><em>Song: (From maimai) “raputa” by sasakure.UK × TJ.hangneil</em></p>

<p><em><strong>Listening note:</strong> Extremely dense layering, rapid note changes, and the use of complex instrumentation make this impractical to reproduce or optimize in Source.</em></p>

<ul>
  <li>Mainstream pop / OP-style arrangements
    <ul>
      <li>MIDIs are not usually available, which already limits candidate songs</li>
      <li>When MIDIs are available, the arrangements tend to be structurally simple</li>
      <li>Automation offers little advantage over manual coding</li>
    </ul>
  </li>
</ul>

<audio controls="">
  <source src="/assets/audio/2026-01-06-cs1101s-sound/hitchcock.mp3" type="audio/mpeg" />
  Your browser does not support the audio element.
</audio>
<p><em>Song: Yorushika - Hitchcock</em></p>

<p><em><strong>Listening note:</strong> Structurally simple with sparse layering, meaning automation offers little advantage over hand-coding.</em></p>

<p>After filtering through a lot of mainstream pop candidates, Lagtrain (by inabakumori) stood out as one of the few that actually satisfied all the constraints.</p>
<ul>
  <li>A usable MIDI was available (<a href="https://www.youtube.com/watch?v=jVgWdFfnxDQ">Shoutout to Latency!</a>)</li>
  <li>Fairly layered, but simple enough to be implemented in Source</li>
  <li>Manageable note density and acceptable performance</li>
</ul>

<p>It wasn’t the most complex song structurally, but it crossed the point where automation had a clear advantage.</p>

<audio controls="">
  <source src="/assets/audio/2026-01-06-cs1101s-sound/lagtrain.mp3" type="audio/mpeg" />
  Your browser does not support the audio element.
</audio>
<p><em>Song: inabakumori - Lagtrain</em></p>

<p><em><strong>Listening note:</strong> Just complex enough to benefit from automation, but simple enough in terms of density to run smoothly in Source.</em></p>

<p>In the end, this felt less like a musical problem and more like an engineering and design problem: choosing the right input to match the system you’re building.</p>

<h2 id="3-technical-details">3. Technical details</h2>

<h3 id="instrument-selection-and-reduction">Instrument Selection and Reduction</h3>

<p>The original MIDI file contained around 15 instrument tracks. Generating all of them in Source would have been impractical, both in terms of sound quality and performance. Many tracks were either textural, redundant, or too subtle to meaningfully contribute once translated to Source’s limited sound system.</p>

<p>To address this, I filtered the MIDI down to five core instruments, using MidiEditor to listen to and isolate tracks one by one:</p>
<ul>
  <li>Vocal</li>
  <li>Piano</li>
  <li>Ocarina</li>
  <li>Percussion</li>
  <li>Bass</li>
</ul>

<p>This preserved the identity of the song while significantly reducing complexity and runtime cost. The result doesn’t need to be perfect; it just needs to strike a good balance between expressiveness and practicality.</p>

<h3 id="mapping-instruments-to-source-sounds">Mapping Instruments to Source Sounds</h3>

<p>Source’s sound module is extremely basic: a handful of waveforms, ADSR envelopes, and simple effects. No presets, no filters, and no built-in tools for tasks beyond layering and envelopes. Every instrument has to be built from scratch.</p>

<p>Most of the sound design process was iterative and empirical. I relied heavily on:</p>
<ul>
  <li>Tweaking waveforms and ADSR parameters</li>
  <li>Reading Source documentation</li>
  <li>Googling synthesis techniques</li>
  <li>Most importantly: just plain trial and error (listening and tweaking the code over and over; it took a lot of time)</li>
</ul>

<p>For melodic instruments like vocals, ocarina, and bass, I used simple waveforms (triangle/square) combined with ADSR envelopes:</p>

<ul>
  <li>Ocarina: triangle + softer ADSR</li>
  <li>Vocal: square + sharper envelope</li>
  <li>Bass: short attack + low sustain</li>
</ul>
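
<p>As a rough sketch, these mappings can be written down as a small parameter table on the Python side of the pipeline (the names and ADSR values below are illustrative, not the actual contest parameters):</p>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code># Hypothetical parameter table for the melodic instruments.
# wave: which Source waveform the generated code should use
# adsr: (attack, decay, sustain_level, release) - times in seconds, sustain as a 0-1 ratio
MELODIC_INSTRUMENTS = {
    "ocarina": {"wave": "triangle", "adsr": (0.050, 0.10, 0.8, 0.15)},  # softer attack/release
    "vocal":   {"wave": "square",   "adsr": (0.010, 0.05, 0.7, 0.05)},  # sharper envelope
    "bass":    {"wave": "triangle", "adsr": (0.005, 0.10, 0.3, 0.05)},  # short attack, low sustain
}
</code></pre></div></div>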

<p>However, percussion required a different approach. Instead of pitched notes, I combined noise, basic oscillators, and aggressive envelopes to approximate drums:</p>

<ul>
  <li>Kick: sine + phase modulation</li>
  <li>Snare: tone + white noise</li>
  <li>Hi-hats: shaped white noise with different decay times</li>
</ul>
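
<p>To make the percussion idea concrete, here is a minimal Python/NumPy sketch of the same three techniques. This is not Source code, and the frequencies and decay times are guesses, but the structure mirrors what the Source instruments do:</p>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code>import numpy as np

SR = 44100  # sample rate in Hz

def decay_env(t, tau):
    return np.exp(-t / tau)  # simple exponential decay envelope

def kick(dur=0.3):
    t = np.linspace(0, dur, int(SR * dur), endpoint=False)
    freq = 40 + 120 * np.exp(-t / 0.08)       # rapidly falling pitch gives the "thump"
    phase = 2 * np.pi * np.cumsum(freq) / SR  # integrate frequency to get phase
    return np.sin(phase) * decay_env(t, 0.12)

def snare(dur=0.2):
    t = np.linspace(0, dur, int(SR * dur), endpoint=False)
    tone = np.sin(2 * np.pi * 180 * t) * decay_env(t, 0.05)
    noise = np.random.uniform(-1, 1, t.size) * decay_env(t, 0.08)
    return 0.5 * tone + 0.5 * noise           # tone + white noise

def hihat(dur=0.08, tau=0.03):
    t = np.linspace(0, dur, int(SR * dur), endpoint=False)
    return np.random.uniform(-1, 1, t.size) * decay_env(t, tau)  # vary tau for open/closed hats
</code></pre></div></div>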

<p>Fun fact: I found that Source’s built-in cello instrument actually approximated a piano chord better than the piano itself. This led me to use layered cello chords to represent piano chords, which sounds pretty cursed, but it works.</p>

<p>With the instruments in place, the next step was turning the MIDI itself into playable Source code.</p>

<h3 id="from-midi-to-source-a-tiny-compiler-pipeline">From MIDI to Source: a tiny compiler pipeline</h3>

<p>The filtered MIDI file contained nearly 1,000 notes combined. It was clear that manually writing <code class="language-plaintext highlighter-rouge">note(...)</code> calls was no longer viable. The MIDI file already had everything I needed — I just needed to translate it into something Source could actually understand.</p>

<p>I cut the song off at 49 seconds, exactly where the intro ends and the first verse starts. In theory, I could have coded the entire song, but for performance reasons I wanted to keep the loading time under 30 seconds.</p>

<p>Here’s a high-level overview of how MIDI files work:</p>
<ul>
  <li>Things are broken down into ticks, not time</li>
  <li>MIDI notes are split across messages. Each message has a type (<code class="language-plaintext highlighter-rouge">note_on</code> / <code class="language-plaintext highlighter-rouge">note_off</code>), a pitch, and a timestamp (in ticks).</li>
  <li>Each track/instrument has its own independent list of messages</li>
</ul>

<p>In short, MIDI doesn’t give you notes directly; it gives you events. That means I had to pair <code class="language-plaintext highlighter-rouge">note_on</code> and <code class="language-plaintext highlighter-rouge">note_off</code> messages to recover actual notes.</p>
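
<p>For example, dumping the raw events of each track with the <code class="language-plaintext highlighter-rouge">mido</code> library in Python looks roughly like this (the filename is a placeholder):</p>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code>import mido

mid = mido.MidiFile("lagtrain.mid")  # placeholder path
for i, track in enumerate(mid.tracks):
    abs_ticks = 0
    for msg in track:
        abs_ticks += msg.time  # msg.time is a delta in ticks, so accumulate it
        if msg.type in ("note_on", "note_off"):
            print(i, msg.type, msg.note, abs_ticks)
</code></pre></div></div>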

<p><strong>Reconstructing notes from events</strong></p>

<p>This is my general idea for turning MIDI events into real notes (there’s a code sketch after the list):</p>
<ul>
  <li>Convert tick counts into timestamps (in seconds)</li>
  <li>Keep track of “active” notes (notes that have received a <code class="language-plaintext highlighter-rouge">note_on</code> but not a <code class="language-plaintext highlighter-rouge">note_off</code> yet)</li>
  <li>A <code class="language-plaintext highlighter-rouge">note_on</code> starts a note</li>
  <li>A <code class="language-plaintext highlighter-rouge">note_off</code> ends it</li>
  <li>The duration of a note is simply the difference between the two timestamps</li>
</ul>
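
<p>In Python with <code class="language-plaintext highlighter-rouge">mido</code>, that pairing logic is only a few lines. This is a minimal sketch assuming a single fixed tempo; a file with tempo changes would need the same bookkeeping per <code class="language-plaintext highlighter-rouge">set_tempo</code> event:</p>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code>import mido

def track_to_notes(track, ticks_per_beat, tempo=500000):
    """Pair note_on/note_off events into (pitch, start, duration) tuples, in seconds."""
    notes, active, abs_ticks = [], {}, 0
    for msg in track:
        abs_ticks += msg.time
        t = mido.tick2second(abs_ticks, ticks_per_beat, tempo)
        if msg.type == "note_on" and msg.velocity &gt; 0:
            active[msg.note] = t                        # note starts now
        elif msg.type in ("note_off", "note_on") and msg.note in active:
            start = active.pop(msg.note)                # note_on with velocity 0 also ends a note
            notes.append((msg.note, start, t - start))  # duration = off time minus on time
    return notes
</code></pre></div></div>

<p>Here <code class="language-plaintext highlighter-rouge">ticks_per_beat</code> comes from <code class="language-plaintext highlighter-rouge">mid.ticks_per_beat</code>, and the real tempo comes from the file’s <code class="language-plaintext highlighter-rouge">set_tempo</code> message rather than the 120 BPM default assumed above.</p>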

<p><strong>Implementing in Source</strong></p>

<p>Source offers two ways of combining sounds:</p>
<ul>
  <li>simultaneously, through the <code class="language-plaintext highlighter-rouge">simultaneously(list(...))</code> function</li>
  <li>or one after another, through the <code class="language-plaintext highlighter-rouge">consecutively(list(...))</code> function</li>
</ul>

<p>The simplest way would be: for every note at timestamp t, prepend t seconds of silence, then play the note, and finally play all of these padded notes simultaneously (sketched in code after the list below). This works, but has some serious issues:</p>
<ul>
  <li>Performance drops extremely fast as the song grows. (From a complexity point of view, this behaves like an O(n²) approach in the number of notes.)</li>
  <li>Sounds in later sections would get drowned out or become noticeably quieter, for reasons I never fully pinned down.</li>
</ul>
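
<p>Concretely, the naive version of the generator would emit Source code shaped like this (a sketch; <code class="language-plaintext highlighter-rouge">note_sound</code> stands in for whichever instrument function the track uses):</p>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code>def emit_naive(notes):
    # one padded sound per note: t seconds of silence, then the note itself
    padded = [
        f"consecutively(list(silence_sound({start:.3f}), note_sound({pitch}, {dur:.3f})))"
        for pitch, start, dur in notes
    ]
    return "simultaneously(list(" + ", ".join(padded) + "))"
</code></pre></div></div>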

<p>Hence I settled on a more efficient approach:</p>
<ul>
  <li>For each instrument, all notes are played consecutively, inserting silence only where needed</li>
  <li>Each instrument track is built independently</li>
  <li>All instrument tracks are then combined using <code class="language-plaintext highlighter-rouge">simultaneously(...)</code></li>
</ul>

<p>Even though this is a lot more complicated to implement, it scaled much better and loaded significantly faster than my previous approach. (From a complexity point of view, this behaves like an O(n) approach.)</p>
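
<p>Sketched in Python, the emitter for this approach looks roughly like the following. It is simplified: it assumes notes within one track never overlap, so chords would have to be split into sub-tracks first, and <code class="language-plaintext highlighter-rouge">note_sound</code> is again a stand-in for the instrument function:</p>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code>def emit_track(notes):
    # play one instrument's notes back-to-back, padding only the gaps with silence
    parts, cursor = [], 0.0
    for pitch, start, dur in sorted(notes, key=lambda n: n[1]):
        if start &gt; cursor:
            parts.append(f"silence_sound({start - cursor:.3f})")  # just the gap, not the full offset
        parts.append(f"note_sound({pitch}, {dur:.3f})")
        cursor = start + dur
    return "consecutively(list(" + ", ".join(parts) + "))"

def emit_song(tracks):
    # one consecutively(...) per instrument, combined once at the top level
    return "simultaneously(list(" + ", ".join(emit_track(t) for t in tracks) + "))"
</code></pre></div></div>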

<details>
  <summary>Bonus: A short runtime analysis of the two approaches</summary>

  <div>
    <p>In the naive approach, every note scheduled at time t is implemented by:</p>

    <ul>
      <li>inserting t seconds of silence</li>
      <li>followed by the actual note</li>
      <li>then playing all notes simultaneously</li>
    </ul>

    <p>If the song contains n notes with increasing start times, this means:</p>
    <ul>
      <li>silence is repeated many times</li>
      <li>each additional note adds more silence than the previous one</li>
      <li>total silence duration grows roughly with the sum of all timestamps</li>
    </ul>

    <p>In effect, the total work performed grows quadratically with the number of notes.</p>

    <p>In contrast, the final approach builds each instrument track once, inserting silence only between consecutive notes. This ensures that total work grows linearly with the number of notes.</p>

    <figure style="text-align: center; margin: 2em 0;">
  <img src="/assets/images/2026-01-06-cs1101s-sound/2.png" alt="diagram" style="max-width: 100%; height: auto; display: block; margin: 0 auto;" />
  
  <figcaption style="font-style: italic; margin-top: 0.5em; color: #666;">
    Comparison of naive and optimized sound construction
  </figcaption>
  
</figure>

  </div>
</details>

<hr />

<h2 id="4-results">4. Results</h2>

<p>The full pipeline and generated Source code are available <a href="https://github.com/Kubogi/cs1101s-midi-to-source">here</a>.</p>

<p>The final output is a Source program that plays the first 49 seconds of Lagtrain, reconstructed entirely from MIDI data.</p>

<video controls="" width="600">
  <source src="/assets/video/2026-01-06-cs1101s-sound/1-web.mp4" type="video/mp4" />
  Your browser does not support the video tag.
</video>

<p>The generated code loads in around 15 seconds on my machine and plays back smoothly without noticeable lag. While it’s obviously not a perfect recreation, the melody and timing are surprisingly accurate and clearly recognizable.</p>

<p>This entry ended up winning the Game of Tones contest!</p>

<p>I think the result worked really well because MIDI already encodes strong structural timing, which I could map seamlessly into Source’s sound model.</p>

<p>This ended up taking around 2-3 days for me, with most of the time spent on trial and error. Despite that, I still had a lot of fun in the end, and winning the contest was the cherry on top.</p>

<h2 id="5-closing-thoughts">5. Closing thoughts</h2>

<p>Overall, this project was less about recreating Lagtrain and more about building a small pipeline to translate structured data into a constrained output format. Similar to my previous Bad Apple projects (such as playing it in a terminal), the core challenge was taking an existing data representation and reshaping it so it could survive under very different limitations.</p>

<p>Although the idea sounds simple on the surface, getting it to work smoothly required many careful engineering decisions — from filtering instruments and cutting off the song, to reconstructing notes from events and choosing an efficient playback strategy in Source. Each step involved tradeoffs between accuracy, performance, and practicality.</p>

<p>In the end, it was exactly this process of designing around constraints that made the project fun and rewarding, and turning it into a working, performant result — with a contest win on top — felt especially satisfying.</p>]]></content><author><name>Hai Hoang</name></author><summary type="html"><![CDATA[Solving a music problem with no musical intuition]]></summary></entry><entry><title type="html">Does vibe coding actually work? What I learned from real-world projects as a year 1 student</title><link href="https://kubogi.github.io/2025/12/28/vibe-coding.html" rel="alternate" type="text/html" title="Does vibe coding actually work? What I learned from real-world projects as a year 1 student" /><published>2025-12-28T00:00:00+00:00</published><updated>2025-12-28T00:00:00+00:00</updated><id>https://kubogi.github.io/2025/12/28/vibe-coding</id><content type="html" xml:base="https://kubogi.github.io/2025/12/28/vibe-coding.html"><![CDATA[<h1 id="1-intro">1. Intro</h1>

<h3 id="the-problem">The problem</h3>

<p>One day, I was tasked with building a full-stack database management system for my mom’s university, completely on my own. At the time, I was a 12th grader whose entire background was mostly competitive programming, with basically no real-world development experience (aside from a few small silly projects and Discord bots).</p>

<h3 id="early-struggles">Early struggles</h3>

<p>Fast forward a few days: I tried to learn Vue.js but couldn’t even get a decent layout on the screen. I mean, it was pretty much expected; after all, UI work is nothing like solving algorithmic problems. Eventually I gave up, grabbed a free template online, and started building on top of it. At least then I had something to work with. Still, progress was slow, and the result didn’t look very promising.</p>

<p>This was also my first time using AI in a project of this scale. I was not new to the idea of vibe coding (I had used it quite a bit in past AI competitions, and it was pretty fire IMO). But this was different. This wasn’t code I would write for a few days and forget about. This was a real project with real users and real consequences. Going in, I didn’t even know what best practices for AI coding were supposed to look like.</p>

<figure style="text-align: center; margin: 2em 0;">
  <img src="/assets/images/2025-12-28-vibe-coding/3.png" alt="o" style="max-width: 70%; height: auto; display: block; margin: 0 auto;" />
  
  <figcaption style="font-style: italic; margin-top: 0.5em; color: #666;">
    Not the cleanest code I've written...
  </figcaption>
  
</figure>

<h3 id="discovering-copilot">Discovering Copilot</h3>

<p>At first, I treated it the same way I did in competitions: prompt a chatbot, copy the code, and move on. It worked, but felt clunky and limiting. Things changed when I discovered Copilot and agent style workflows (or what people nowadays would call “vibe coding”). Suddenly, one or two sentences could turn into hundreds of lines of code.</p>

<p>That was the moment I truly felt the power everyone was talking about. It was also when I started going a little too deep down the rabbit hole.</p>

<h1 id="2-when-vibe-coding-felt-like-a-superpower">2. When vibe coding felt like a superpower</h1>

<h3 id="the-magic">The magic</h3>

<p>Ideas that would normally take hours were suddenly taking five minutes. I would type a couple of sentences, and an entire new page would just appear out of nowhere. I did not have to spend hours reading documentation, thinking through layout designs, or debugging random typos just to get something on the screen.</p>

<p>It really felt like Stack Overflow on steroids. Even ideas I struggled to clearly explain somehow came out looking right. The output felt complete, polished, and (at least on the surface) correct. It was kind of insane. A full page of functionality that would normally take forever just to piece together was suddenly there.</p>

<h3 id="the-realization">The realization</h3>

<p>That was when things really clicked for me. I started to understand why people were worried about AI taking over. This was not even like the old prompt and copy workflow I was used to. It genuinely felt like the AI was doing the work for me (at least, that is what I thought).</p>

<h1 id="3-when-things-start-to-go-wrong">3. When things start to go wrong</h1>

<h3 id="hallucinations">Hallucinations</h3>

<p>As I kept going, problems started to surface. Simple prompts kept spitting out massive chunks of code. Entire pages would show up from a short (and vague) description. But that’s exactly the problem. Those pages almost always came with way more than I asked for:</p>
<ul>
  <li>Extra features</li>
  <li>Abstractions</li>
  <li>Layers of logic I never planned on building</li>
</ul>

<p>I didn’t specify enough, and the AI happily made those decisions for me.</p>

<h3 id="loss-of-understanding">Loss of understanding</h3>

<p>And when the code finally appeared, half the time I didn’t even know what I was looking at. I just trusted what it spat out and moved on. It felt fine initially; I mean, things were still “working”, so why worry? But as the project grew, that shortcut started catching up to me.</p>

<h3 id="debugging-nightmare">Debugging nightmare</h3>

<p>Bugs started to show up. Not obvious ones, but the annoying and subtle issues buried somewhere deep inside these giant piles of “clean” code. At that point, typing “fix this” into a prompt wasn’t going to cut it. There wasn’t really any other way besides diving into the code myself the old-fashioned way.</p>

<p>Trying to understand what was going on in those files was… not fun. Every bug took tens of minutes just to understand, let alone fix. Debugging often felt slower than if I’d just written everything by hand from the start. And even when I did “fix” something, it didn’t feel like real understanding. It felt more like patching holes in a boat that was sinking faster than I could keep up.</p>

<h3 id="limits-of-ai">Limits of AI</h3>

<p>And even setting the bugs aside, I still hit the limits of vibe coding. Bigger features were always a gamble. Lots of times the AI would get stuck in loops, endlessly adding and removing files for no apparent reason. Other times it would partially rewrite or outright corrupt files that were already working.</p>

<p>I can’t count the number of times I stared at the screen, watching it go back and forth, before sighing, typing <code class="language-plaintext highlighter-rouge">git reset --hard HEAD</code>, and starting over. It felt even more exhausting than debugging myself (and I have plenty of experience debugging in competitions!)</p>

<h3 id="wake-up-call">Wake-up call</h3>

<p>That was the wake-up call. If I kept using AI the same way, the project wasn’t going to get easier, it was only going to get harder to manage. I had to rethink how I was using this tool, or consider starting over.</p>

<h1 id="4-second-attempt">4. Second attempt</h1>

<h3 id="new-project">New project</h3>

<p>After finishing this project, I wasn’t really happy with it. Even though everything worked fine, it didn’t feel like I learned anything besides patching the monstrosity I had generated, over and over. But there was a chance to redeem myself. I took on another full-stack project, with pretty much the same requirements as the first one, but on a much bigger scale:</p>
<ul>
  <li>More users</li>
  <li>More data</li>
  <li>More features</li>
  <li>Higher expectations</li>
</ul>

<p>Still solo, no team, no guidance, no code reviews, and no safety net besides Git.</p>

<h3 id="new-mindset">New mindset</h3>

<p>However, my mindset had changed. I stopped thinking in terms of “ask it to build this giant thing and pray that it works” and started breaking things down into small, manageable chunks. Not only could the AI digest those far more easily, it was also better for my own growth and experience (I still wanted to learn something, after all).</p>

<p>I figured: if I couldn’t even explain a task clearly, could I really expect the AI to handle it properly and reliably?</p>

<h3 id="planning">Planning</h3>

<p>This time, I actually planned things before writing code (way more than I did before; it was pretty overwhelming). For example, for the frontend I’d sketch out the layouts in PowerPoint, screenshot them, and send them along with specific instructions.</p>

<p><em>(I have no idea if this is good practice 🙏 pls forgive me frontend people)</em></p>

<p>This usually meant the AI got it mostly right on the first try, and I only had to make small changes afterwards. For the backend, I defined my MongoDB models and how auth and middleware should work in a single prompt, and asked the AI to lay out a detailed plan for the endpoints before generating anything.</p>

<h3 id="more-mistakes">More mistakes</h3>

<p>That said, there were still times when I messed up just like in the first project. One time I somehow ended up with a file that was almost 1,900 lines long (what?). Every little change added up, and I kept telling myself I’d clean it up later.</p>

<figure style="text-align: center; margin: 2em 0;">
  <img src="/assets/images/2025-12-28-vibe-coding/2.png" alt="what" style="max-width: 60%; height: auto; display: block; margin: 0 auto;" />
  
  <figcaption style="font-style: italic; margin-top: 0.5em; color: #666;">
    How did this even happen?
  </figcaption>
  
</figure>

<p>When I finally tried to refactor it (because the AI literally couldn’t add features without outright corrupting the file), it turned into two straight hours of painful back and forth. The AI kept hallucinating, breaking stuff, and just doing weird things in general. It wasn’t a fun time, but there were plenty of lessons learned afterwards.</p>

<h3 id="learning-things-the-hard-way">Learning things the hard way</h3>

<p>That experience forced me to learn some real software engineering lessons the hard way (I guess it’s the best way to learn, after all…?). I started caring more about:</p>
<ul>
  <li>Structure</li>
  <li>Splitting files into manageable chunks</li>
  <li>Actually thinking about sustainable coding</li>
</ul>

<p>I learned to stop making giant commits with thousands of lines and nonsense commit messages like <code class="language-plaintext highlighter-rouge">asfjahsflkhf</code>. Smaller commits just make everything easier to track, organize, and actually debug when things go wrong.</p>

<figure style="text-align: center; margin: 2em 0;">
  <img src="/assets/images/2025-12-28-vibe-coding/1.png" alt="diabolical" style="max-width: 80%; height: auto; display: block; margin: 0 auto;" />
  
  <figcaption style="font-style: italic; margin-top: 0.5em; color: #666;">
    Some of my diabolical commits
  </figcaption>
  
</figure>

<h3 id="prompting">Prompting</h3>

<p>As the project went on, I also got better at prompting. I became more specific, scoped things tighter, and stopped overwhelming the AI with too many things at once. I even started using a separate AI just to help write better prompts.</p>

<p>I also added tests later on, which helped a lot. Tests made it obvious when something broke and gave me more confidence moving forward.</p>

<h3 id="my-overall-experience">My overall experience</h3>

<p>The project was still huge and beyond my experience level. But this time, it actually felt manageable. I wasn’t constantly fighting the codebase anymore. It felt like I was actually in control.</p>

<h1 id="5-takeaways">5. Takeaways</h1>

<h3 id="ai-coding-is-still-coding-">AI Coding is still coding !!</h3>

<p>One thing I learned the hard way is that AI assisted coding is still just coding. The only difference is that instead of writing code directly, you’re writing “code” in human language, but still for a machine.</p>

<p>You can’t really expect to build something in code without hyper-specific details, so why would it be any different with AI coding? If you don’t understand what you’re building, the AI will happily guess for you (and those guesses will add up over time).</p>

<h3 id="does-vibe-coding-actually-work">Does vibe coding actually work?</h3>

<p>Yes (kind of). Even with AI, coding skills, planning, and even basic project management still matter just as much. AI is wonderful at turning <em>clear</em> intent into code and saving you from spending time writing tedious code over and over again, but it cannot decide what should exist in the first place or why. That part still very much needs a human.</p>

<h3 id="will-ai-replace-programmers">Will AI replace programmers?</h3>

<p>Probably not. AI just changes how we develop software. You still need someone to “code” the AI with clear goals, structure, and constraints. Vibe coding can get you started, but it won’t really carry a meaningful project on its own.</p>]]></content><author><name>Hai Hoang</name></author><summary type="html"><![CDATA[1. Intro]]></summary></entry></feed>