Alex Vipond

AI-powered frontend web dev

Published over 2 years ago

Okay, I finally got around to writing code with ChatGPT. It's pretty fucking cool.

ChatGPT writing Vue code for my prompt

My favorite take on it so far is Daniel Coulbourne's. He predicts that the codebase of the future will be a bunch of AI prompts checked into a git repo. At build time, AI-powered tools will transform those prompts into frontend code, backend code, unit tests, end to end tests, etc.

He also thinks AI tooling is going to shake up a lot of industries, including web dev, and kill a lot of jobs in the process.

I'm a frontend developer, and here are my two cents.

The state of the art

To get a good sense of the state of the art for frontend development, I prompted ChatGPT to do some cool stuff:

  • Write a ModalDialog React component powered by React Aria and styled with Tailwind
  • Write a useListbox Vue function ref composable that handles keyboard interactions for an accessible multiselect listbox. Then write a similar useGrid composable to power an accessible editable grid.
  • In TypeScript, show how to model a multi-step form as a directed acyclic graph, and write helper functions for traversing and walking the graph
  • Write a markdown-it plugin that supports a custom syntax for including gifs from Giphy in your Markdown file

I would expect an intermediate or advanced frontend web developer to be able to handle each of these tasks.
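For the multi-step form prompt, here's a rough TypeScript sketch of the kind of structure I was asking for. It's my own illustration, not ChatGPT's output, and all of the names are placeholders.

type StepId = string

type Step = {
  id: StepId
  // Each outgoing edge points to a possible next step, guarded by a
  // predicate over the form data collected so far.
  next: { to: StepId, when: (data: Record<string, unknown>) => boolean }[]
}

type FormGraph = Map<StepId, Step>

// Walk the graph from a starting step, following the first edge whose
// guard passes. Because the graph is acyclic, the walk always terminates.
function walk (
  graph: FormGraph,
  start: StepId,
  data: Record<string, unknown>
): StepId[] {
  const visited: StepId[] = []
  let current: StepId | undefined = start

  while (current) {
    visited.push(current)
    current = graph.get(current)?.next.find(edge => edge.when(data))?.to
  }

  return visited
}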

So let's look at some AI-generated code! All I added was code comments and blog-friendly line breaking.

React Aria modal dialog

Prompts:

  1. Write a modal dialog React component using the React Aria library
  2. Now style it with Tailwind

tl;dr

Tailwind styles were spot on, but this code would throw several runtime and build time errors.

ChatGPT obviously knows exactly what a modal dialog is, and how to sketch one out in React, but the React Aria integration isn't very close to working. I have to imagine React Aria modal dialogs, especially properly reusable ones, are rare in ChatGPT's training dataset.

A human could copy working code from the React Aria docs in a few minutes, and might spend a few days writing and testing a custom ModalDialog that shares patterns, API decisions, and design abstractions with a bespoke UI kit.

Score: 2/5 stars ⭐️⭐️

import { useDialog } from 'react-aria';

// I was intending to get more of a reusable modal dialog component,
// instead of a full modal dialog example with hardcoded markup, but
// I didn't prompt for that.
function MyModal({ isOpen, onClose }) {
  // ✅ This is exactly where you should call the `useDialog` hook
  // ❌ `useDialog` doesn't accept `isOpen` or `onClose` props
  const dialog = useDialog({ isOpen, onClose });

  return (
    <>
      {/*
        ✅ `triggerProps` is the correct variable name for this button
        ✅ Tailwind classes look good
        
        ❌ `triggerProps` gets returned from the `useOverlayTrigger`
           hook, not `useDialog`
      */}
      <button
        className="bg-blue-500 text-white py-2 px-4 rounded-lg"
        {...dialog.triggerProps}
      >
        Open Modal
      </button>
      <div
        className={
          `fixed top-0 left-0 h-full w-full z-50
          flex items-center justify-center
          ${isOpen ? 'block' : 'hidden'}`
        }
        // ✅ This is the correct place to put `dialog.dialogProps`
        // ✅ Tailwind classes look good, though `z-50` is a weird choice
        {...dialog.dialogProps}
      >
        {/*
          ✅ ChatGPT continues to show good Tailwind knowledge
          ⚠️ In a reusable component, this would be a slot for `children`.
            I'm 100% certain I could better prompt the AI to set that up
            for me instead of hardcoding this markup.
        */}
        <div className="bg-white p-6 rounded-lg">
          <h2 className="text-lg font-medium">Modal Title</h2>
          <p className="mt-4">Modal content goes here...</p>
          <button
            className="bg-red-500 text-white py-2 px-4 rounded-lg"
            // ❌ `closeProps` is not a thing. Opening and closing should
            //    be handled in a separate `ModalOverlay` component.
            {...dialog.closeProps}
          >
            Close
          </button>
        </div>
      </div>
    </>
  );
}
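
For contrast, here's a rough sketch of what the docs-based version of the dialog itself looks like, going off my memory of the React Aria `useDialog` pattern. It's not a full reusable ModalDialog: `Dialog`, `title`, and `children` are placeholder names, and the trigger button and open/close state would live in separate overlay hooks and components.

import { useRef } from 'react';
import { useDialog } from 'react-aria';

// `useDialog` takes the props and a ref to the dialog element, and
// returns `dialogProps` and `titleProps` to spread onto the markup.
function Dialog({ title, children, ...props }) {
  const ref = useRef(null);
  const { dialogProps, titleProps } = useDialog(props, ref);

  return (
    <div {...dialogProps} ref={ref} className="bg-white p-6 rounded-lg">
      <h2 {...titleProps} className="text-lg font-medium">{title}</h2>
      {children}
    </div>
  );
}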

Vue 3 function ref composable

Prompts:

  1. Use Vue 3 to write a useListbox composable that handles ARIA attributes as well as keyboard and pointer interactions for an accessible listbox
  2. Add some logic to the composable to set tabindex on each list item element so that only the active listbox option can receive keyboard focus
  3. Now handle the case where the listbox contains presentational markup, not just listbox options, and listboxRef.value.children is not a reliable way to get a list of all options
  4. Instead of using a .listbox-option class to identify options, define a listItems ref inside the composable. The value of listItems should be an array. Also define a setListItem function that receives an element and an index, and stores the element at the correct index in listItems.value. Include setListItem in the return value, and bind it to the ref attribute of the v-for element
  5. Include the tabindex logic that iterates through the elements in listItems to keep the tabindex up to date
  6. Add multiselect support
  7. Make sure it supports the use case where listItems can change order or length reactively during the component lifecycle

tl;dr

ChatGPT did really nice work with this one, much better than the React Aria modal. That's almost certainly because Vue is much more widely used than React Aria.

It was annoyingly hard to get ChatGPT to write a good function ref composable, the way I like to write them in Baleada. But apart from the intricacies of function refs, the AI really nailed this one.

Score: 4/5 stars ⭐️⭐️⭐️⭐️

import {
  ref,
  onMounted,
  onUnmounted,
  watch
} from 'vue'

export default function useListbox(props) {
  // We ended up with a `useListbox` that only supports multiselect.
  // I was hoping for something that supports both multi and single
  // select, but I didn't explicitly ask for that.
  const selectedIndices = ref(new Set())
  // The `() => ref()` callback fills the array with nested reactive
  // refs. This is fine, but doesn't really serve any useful purpose in
  // this implementation, or in any implementation I've ever written
  // myself.
  const listItems = ref(Array.from({
      length: props.listItems.length
    },
    () => ref()
  ))
  const listboxRef = ref()

  // I omitted a bunch of code from this `handleKeyDown` function
  // just to keep the blog post length under control, but it was all
  // good code. ChatGPT clearly knew how to navigate its `selectedIndices`
  // set to add or remove selections.
  //
  // It was checking for the ctrl key to add selections to the set instead
  // of clearing and resetting. I wanted to see support for shift and
  // cmd, but those were not included.
  function handleKeyDown(event) {
    switch (event.key) {
      case 'ArrowUp':
        // ...
        break
      case 'ArrowDown':
        // ...
        break
      case 'Home':
        // ...
        break
      case 'End':
        // ...
        break
    }
  }

  // All of the `handleKeyDown` code was similar to what you see here;
  // it's pretty great stuff.
  function handleClick(event) {
    const newSelectedIndex = listItems.value.indexOf(event.target)
    if (!event.ctrlKey) {
      selectedIndices.value.clear()
    }
    if (selectedIndices.value.has(newSelectedIndex)) {
      selectedIndices.value.delete(newSelectedIndex)
    } else {
      selectedIndices.value.add(newSelectedIndex)
    }
  }

  // I had to coach ChatGPT pretty heavily to get it to write this
  // function ref. It was totally unaware of the function ref feature
  // in Vue, which isn't surprising, considering this code is rare in
  // the training dataset.
  //
  // This is also a weird place to put the code—ideally the function ref
  // would be defined right next to the reactive reference it mutates,
  // to keep things nicely organized. Again, ChatGPT doesn't have enough
  // training data for stuff like this.
  function setListItem(element, index) {
    listItems.value[index].value = element
  }

  // Mounted and unmounted hooks look good. I didn't do any additional
  // prompting; it just knew how and where to add and clean up listeners.
  onMounted(() => {
    listboxRef.value.setAttribute('role', 'listbox')
    listboxRef.value.addEventListener('keydown', handleKeyDown)
    listboxRef.value.addEventListener('click', handleClick)
  })
  onUnmounted(() => {
    listboxRef.value.removeEventListener('keydown', handleKeyDown)
    listboxRef.value.removeEventListener('click', handleClick)
  })

  // All code comments inside this `watch` were written by ChatGPT,
  // so I left them in for you to read. Interesting stuff! This is
  // how ChatGPT is handling the case where list options can change
  // length or order reactively, and should retain selection if that
  // happens.
  //
  // It's good code, but again, it shows a lack of understanding of
  // how function refs work. When you're capturing options via a function
  // ref, you don't need to accept `props.listItems` and react to its
  // changes—you can just watch the `listItems` ref directly for changes
  // to rendered content.
  watch(props.listItems, (newItems) => {
      const diff = newItems.length - listItems.value.length
      if (diff > 0) {
        // if the new listItems is longer than the previous one
        // add new refs to the listItems
        listItems.value = [...listItems.value, ...Array.from({
          length: diff
        }, () => ref())]
      } else if (diff < 0) {
        // if the new listItems is shorter than the previous one
        // remove items from the listItems
        listItems.value = listItems.value.slice(0, diff)
        // remove indices from selectedIndices that are not present anymore
        const newSelectedIndices = new Set()
        for (const index of selectedIndices.value) {
          if (index < newItems.length) {
            newSelectedIndices.add(index)
          }
        }
        selectedIndices.value = newSelectedIndices
      }
    }, {
      deep: true
    }),

    // Setting `tabindex` based on selection is on the right track. For
    // proper accessibility though, you'd only want `tabindex=0` on
    // one list item. Usually it would be on the most recently focused
    // item, and you'd have to maintain a separate piece of reactive state
    // to track the focused item.
    watch(selectedIndices, (newIndices) => {
      listItems.value.forEach((itemRef, index) => {
        itemRef.value.setAttribute("tabindex", newIndices.has(index) ? 0 : -1)
      });
    });

  // API is a good start. It's missing functions to programmatically
  // select items, but I didn't prompt ChatGPT to add those.
  return {
    selectedIndices,
    listItems,
    listboxRef,
    setListItem
  }
}
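
Here's a minimal sketch of how I'd consume this composable in a component, mostly to show the function ref binding that ChatGPT struggled with. The component markup and prop names are placeholders, not part of ChatGPT's output.

<script setup>
import useListbox from './useListbox'

const props = defineProps(['listItems'])

const { listboxRef, setListItem, selectedIndices } = useListbox(props)
</script>

<template>
  <ul ref="listboxRef">
    <!--
      `setListItem` is bound as a function ref, so the composable collects
      only real option elements, in render order. Assumes list items are
      unique strings.
    -->
    <li
      v-for="(item, index) in props.listItems"
      :key="item"
      role="option"
      :aria-selected="selectedIndices.has(index)"
      :ref="el => setListItem(el, index)"
    >
      {{ item }}
    </li>
  </ul>
</template>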

What I want from AI tooling

I want:

  • .prompt files with syntax highlighting, probably highlighting basic parts of speech (noun, verb, adverb, etc.)
  • A package manager. We need to be able to install collections of prompts that can generate reusable abstractions.
  • A reliable import/export solution so I can make sure the AI is aware of any abstractions I've created or installed
  • Ability to write ad hoc code samples and heavily weight the AI toward my preferred coding style for any given project. This will let us introduce the AI to niche features of whatever other tooling we're building or importing.
  • Visual designs as prompts. Text prompts are great, but we need to be able to draw a layout and have the AI implement it in code.
  • Some sort of caching mechanism, so I can save high-quality code and make sure the AI doesn't try to regenerate it on my next build.

That's what I want from AI tooling to make me more productive.

But I also love the creative process of coding and problem-solving by hand, and I don't want to lose that.

In my spare time, I like to woodwork, and something I've been picking up over the last year or so is hand tool woodworking. I still use power saws for 80-90% of lumber processing, but if you really want a precise, perfect joint between two pieces of wood, you use hand tools.

A sharp chisel or plane is able to remove as little as a thousandth of an inch of material, maybe even less. At the end of a project, a few minutes of hand tool work creates impossibly smooth surfaces and perfect-fitting joints, 10x faster and better than power tools that cost 10x more.

Web dev feels the same to me.

ChatGPT is a power tool, and it will get more expensive as it gets more powerful. In the future, if I feel like it's the right tool for the job, I'll use it, but I almost certainly will finish my projects by hand to get that last layer of precision in my UX and DX.

Most people who buy wood furniture don't know or care what a dovetail joint is, no matter how much time a hand tool woodworker spends perfecting that detail. Most customers won't pay extra for it.

I don't expect the consumer response to less precise, less bespoke, AI-generated software to be any different. But I'm hoping there will still be room in the market for precision coding.

Jobs and income inequality

As AI tools get better, people will lose their jobs, and new jobs will get created. There's a very real chance that AI tools will kill more dev jobs than they create, and contribute to increased income inequality.

Software products will probably get more homogeneous, just like furniture, kitchen utensils, toys, cars, electronics, food, fertilizer, livestock, fuel, and everything else has since the Industrial Revolution kicked in.

Like any artisanal product, handwritten code will run better and last longer, but a fully AI-generated app will be good enough to ship to a market where a lot of customers won't recognize or care about the bugs or longevity. Planned software obsolescence will make business sense. Overworked computers will suck more energy.

A few people will make a lot of money, and a lot of people won't.

I'd like to keep my job, and I think the key to doing that is to master the workflow of AI-powered web dev.

I'm betting the workflow will look something like this:

  • Write an AI prompt
  • Assuming the AI takes some time to generate code, switch to another task and write its AI prompt
  • Keep writing prompts until you run out of tasks or an AI responds to a prompt, whichever comes first.
  • Check the AI's work for accuracy. If you see incorrect solutions, bugs, or opportunities for improvement, adjust your prompt.
  • Repeat until all tasks are done
  • By hand, write bespoke abstractions and polish away the rough edges in generated code

If that's what the workflow looks like, here's what I think it takes to thrive:

  • Write clear, precise pseudo-code prompts
  • Quickly read AI-generated code to analyze for accuracy
  • Learn from code inaccuracies—constantly improve your ability to write great pseudo-code
  • Context-switch, quickly and often
  • Stay sharp on your actual coding skills—the best of the best will still need to write code by hand fairly often

These skills aren't radically different from what any good web dev needs to be successful in the industry right now. I'm sure there will be quirks and tricks to get AI to generate exactly the results you want, but the higher-level skillset is the same.

The difference is that devs will use these skills to generate a lot more code, a lot more quickly.

Hopefully this workflow and skillset will keep me on top of those mountains of code, and not buried underneath them.
