While I was at KubeCon, I intentionally didn’t attend any AI talks. But of course, the topic of automating development processes and improving productivity with AI tools is still top of mind for me.

Let’s face some facts:

  • you will be using generative AI for coding
  • your colleagues will be using generative AI too, and that matters most in projects with large, unfamiliar codebases

I discussed this topic with some of my friends and colleagues, and despite the mostly positive feedback, there is still a big problem: generating high-quality code that is easy to maintain, test, and read.

In theory, this problem should already have been solved by the previous generation of tools (in my case, mostly VSCode extensions like Cline/Roo Code), but every tool introduced its own abstractions and formats. Now we are all left with the need to rewrite all of these files into <something new>. This time I am trying to be more mindful and not rush into rewriting right away after fully switching to GitHub Copilot CLI.

This is how I actually started my research into how to write skills, compose agents, and generally work effectively with GitHub Copilot CLI. Here is what I have found so far.
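For context before the links: as far as I can tell, Copilot CLI picks up skills in the Agent Skills format, i.e. a directory containing a SKILL.md whose YAML frontmatter names and describes the skill, with the instructions in the markdown body. A minimal sketch; the skill name, the rules, and the exact directory the CLI scans are my assumptions, so check the docs for your version:

```markdown
---
name: go-modern-guidelines
description: Flag outdated Go patterns and prefer modern constructs when reviewing or generating Go code.
---

# Go Modern Guidelines

- Prefer `any` over `interface{}` (Go 1.18+).
- Prefer the `slices` and `maps` packages over hand-rolled loops (Go 1.21+).
- Prefer `for i := range n` over classic index loops where the index is all you need (Go 1.22+).
```

The frontmatter `description` is what the agent uses to decide when to load the skill, so it is worth writing carefully.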

Videos

Articles

  • GitHub Copilot CLI best practices

    A great read if you are planning to adopt GitHub Copilot CLI in your own workflow or in your team’s or project’s development.

  • Go Modern Guidelines Skill by JetBrains

    Recent Go releases have added more and more syntactic sugar and new constructs. This skill helps find outdated patterns in an existing codebase, or generate new code that avoids them.

  • Practical Guide to Evaluating and Testing Agent Skills

    A practical guide on testing AI skills. It would be useful if you are planning to write your own skills or want to adopt this pattern in your team. Testing is essential for critical execution paths, e.g. when a skill works with sensitive data or performs actions in production.