Simon Willison’s Weblog


Recent entries

DALL-E 3, GPT4All, PMTiles, sqlite-migrate, datasette-edit-schema seven hours ago

I wrote a lot this week. I also did some fun research into new options for self-hosting vector maps and pushed out several new releases of plugins.

On the blog

PMTiles and MapLibre GL

I saw a post about Protomaps on Hacker News. It’s absolutely fantastic technology.

The Protomaps PMTiles file format lets you bundle together vector tiles in a single file which is designed to be queried using HTTP range header requests.

This means you can drop a single 107GB file on cloud hosting and use it to efficiently serve vector maps to clients, fetching just the data they need for the current map area.
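To get a feel for how that works, here is a minimal Python sketch of the kind of request a client makes (the URL is a placeholder): it fetches just the first 16KB of a hosted PMTiles archive, enough for the header and root directory, rather than the whole file.

import requests

# Placeholder URL - any statically hosted .pmtiles file would do
url = "https://example.com/tiles/hmb.pmtiles"

# Request only the first 16KB of the archive using an HTTP Range header
response = requests.get(url, headers={"Range": "bytes=0-16383"})

print(response.status_code)              # 206 Partial Content if ranges are supported
print(len(response.content))             # at most 16384 bytes, not the whole file
print(response.headers.get("Content-Range"))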

Even better than that, you can create your own subset of the larger map covering just the area you care about.

I tried this out against my hometown of Half Moon Bay and got a building-outline-level vector map for the whole town in just a 2MB file!

You can see the result (which also includes business listing markers from Overture maps) at simonw.github.io/hmb-map.

A vector map of El Granada showing the area around the harbor, with lots of little markers for different businesses. Protomaps (c) OpenStreetMap in the corner.

Lots more details of how I built this, including using Vite as a build tool and the MapLibre GL JavaScript library to render the map, are in my TIL Serving a custom vector web map using PMTiles and maplibre-gl.

I’m so excited about this: we now have the ability to entirely self-host vector maps of any location in the world, using openly licensed data, without depending on anything other than our own static file hosting web server.

llm-gpt4all

This was a tiny release—literally a one line code change—with a huge potential impact.

Nomic AI’s GPT4All is a really cool project. They describe their focus as “a free-to-use, locally running, privacy-aware chatbot. No GPU or internet required.”—they’ve taken llama.cpp (and other libraries) and wrapped them in a much nicer experience, complete with Windows, macOS and Ubuntu installers.

Under the hood it’s mostly Python, and Nomic have done a fantastic job releasing that Python core as an installable Python package—meaning you can literally pip install gpt4all to get almost everything you need to run a local language model!

Unlike alternative Python libraries MLC and llama-cpp-python, Nomic have done the work to publish compiled binary wheels to PyPI... which means pip install gpt4all works without needing a compiler toolchain or any extra steps!
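Once it's installed, running a model from Python takes just a few lines. A rough sketch (the model filename is a placeholder; GPT4All downloads models on first use and the exact names change between releases):

from gpt4all import GPT4All

# Placeholder model filename - GPT4All downloads the model on first use
model = GPT4All("mistral-7b-instruct-v0.1.Q4_0.gguf")

print(model.generate("Ten facts about pelicans", max_tokens=200))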

My LLM tool has had a llm-gpt4all plugin since I first added alternative model backends via plugins in July. Unfortunately, it spat out weird debugging information that I had been unable to hide (a problem that still affects llm-llama-cpp).

Nomic have fixed this!

As a result, llm-gpt4all is now my recommended plugin for getting started running local LLMs:

pipx install llm
llm install llm-gpt4all
llm -m mistral-7b-instruct-v0 "ten facts about pelicans"

The latest plugin can also now use the GPU on macOS, a key feature of Nomic’s big release in September.

sqlite-migrate

sqlite-migrate is my plugin that adds a simple migration system to sqlite-utils, for applying changes to a database schema in a controlled, repeatable way.

Alex Garcia spotted a bug in the way it handled multiple migration sets with overlapping migration names, which is now fixed in sqlite-migrate 0.1b0.

Ironically, the fix involved changing the schema of the _sqlite_migrations table used to track which migrations have been applied... which is the one part of the system that isn’t itself managed by its own migration system! Instead, I had to implement a conditional check that determines whether the table needs to be updated.
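The real fix lives in sqlite-migrate itself, but the general shape of that kind of conditional check, using sqlite-utils and a hypothetical column name, looks something like this:

import sqlite_utils

db = sqlite_utils.Database("data.db")
migrations = db["_sqlite_migrations"]

# Hypothetical example: only alter the tracking table if a column is missing
if migrations.exists() and "hash" not in migrations.columns_dict:
    migrations.add_column("hash", str)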

A recent thread about SQLite on Hacker News included a surprising number of complaints about the difficulty of running migrations, due to the limited feature set of SQLite’s core ALTER TABLE implementation.

The combination of sqlite-migrate and the table.transform() method in sqlite-utils offers a pretty robust solution to this problem. Clearly I need to put more work into promoting it!
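For anyone who hasn't tried it, table.transform() hides the create-new-table, copy-data, rename dance that SQLite otherwise requires behind a single method call. A sketch with made-up table and column names:

import sqlite_utils

db = sqlite_utils.Database("data.db")

# Made-up schema: change a column's type and drop another in one step,
# working around the limits of SQLite's ALTER TABLE
db["places"].transform(
    types={"population": int},
    drop=["legacy_code"],
)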

Homebrew trouble for LLM

I started getting confusing bug reports for my various LLM projects, all of which boiled down to a failure to install plugins that depended on PyTorch.

It turns out the LLM package for Homebrew upgraded to Python 3.12 last week... but PyTorch isn’t yet available for Python 3.12.

This means that while base LLM installed from Homebrew works fine, attempts to install things like my new llm-embed-jina plugin fail with weird errors.

I’m not sure the best way to address this. For the moment I’ve removed the recommendation to install using Homebrew and replaced it with pipx in a few places. I have an open issue to find a better solution for this.

The difficulty of debugging this issue prompted me to ship a new plugin that I’ve been contemplating for a while: llm-python.

Installing this plugin adds a new llm python command, which runs a Python interpreter in the same virtual environment as LLM—useful if you installed LLM via pipx or Homebrew and don’t know where that virtual environment is located.

It’s great for debugging: I can ask people to run llm python -c 'import sys; print(sys.path)' for example to figure out what their Python path looks like.

It’s also promising as a tool for future tutorials about the LLM Python library. I can tell people to pipx install llm and then run llm python to get a Python interpreter with the library already installed, without them having to mess around with virtual environments directly.
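The sort of thing such a tutorial might open with, typed into the interpreter that llm python starts (this sketch assumes an OpenAI API key has already been configured with llm keys set openai):

import llm

model = llm.get_model("gpt-3.5-turbo")
response = model.prompt("Ten facts about pelicans")
print(response.text())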

Add and remove indexes in datasette-edit-schema

We’re iterating on Datasette Cloud based on feedback from people using the preview. One request was the ability to add and remove indexes from larger tables, to help speed up faceting.

datasette-edit-schema adds that feature.

That plugin includes this script for automatically updating the screenshot in the README using shot-scraper. Here’s the latest result:

Screenshot of the edit schema UI - you can rename a table, change existing columns, add a column, update foreign key relationships, change the primary key, delete the table and now edit the table indexes.

Releases this week

TIL this week

Now add a walrus: Prompt engineering in DALL-E 3 four days ago

Last year I wrote about my initial experiments with DALL-E 2, OpenAI’s image generation model. I’ve been having an absurd amount of fun playing with its sequel, DALL-E 3 recently. Here are some notes, including a peek under the hood and some notes on the leaked system prompt.

DALL-E 3 is available via ChatGPT Plus. For $20/month you get GPT-4 access, DALL-E 3, GPT Vision, Code Interpreter (aka Coding Intern), ChatGPT Plugins and the new, weird and slightly dystopian voice chat feature in the ChatGPT mobile apps. It’s absolutely worth the money, for the combined entertainment value alone.

A fascinating thing about DALL-E 3 is that you don’t prompt it directly: you instead access it via ChatGPT, which prompts the model for you.

ChatGPT then turns your query into several prompts, each producing a different image. The prompts it generates are available if you click on the images (on mobile you have to flip your phone horizontal to see that option).

I started by typing this into ChatGPT with DALL-E enabled:

A super posh pelican with a monocle watching the Monaco F1

It gave me these four images. I’ve included the prompt it used for each one:

Pelican 1
Photo of a refined pelican wearing a shiny gold monocle, perched on a railing with the Monaco Grand Prix F1 race in the background. The race cars blur past on the circuit, while yachts are anchored in the nearby harbor.
Pelican 2
Illustration of a sophisticated pelican donning a crystal clear monocle, observing the Monaco F1 race with keen interest. The race track is filled with speeding F1 cars, and the Mediterranean Sea sparkles in the distance with luxurious yachts.
Pelican 3
Watercolor painting of a posh pelican, its feathers groomed to perfection, equipped with a delicate monocle. It stands tall with the iconic Monaco F1 race happening behind it, cars racing at breakneck speeds and the shimmering sea beyond.
Pelican 4
Vector image of an elegant pelican with a classy monocle, standing gracefully against the backdrop of the Monaco Grand Prix. The F1 cars create a colorful streak as they race, and lavish yachts float in the nearby waters.

Note how the prompts it generates cover four different styles—photo, illustration, watercolor painting and vector image. They also use pleasing and varied language to describe different aspects of the image.

ChatGPT is prompting DALL-E for us!

The really fun thing though is that you can ask it follow-ups.

I said:

More like the first one please

And got back:

Pelican 1
Photo of an aristocratic pelican adorned with a golden monocle, perched confidently with the Monaco Grand Prix F1 race unfolding behind it. The Formula 1 cars zip by on the circuit, while opulent yachts are moored in the nearby marina.
Pelican 2
Photo of a regal pelican with a polished monocle, standing prominently against the backdrop of the Monaco F1 race. The roar of the engines can almost be heard as the cars speed past, and the glinting sea is dotted with luxurious boats.
Pelican 3
Photo of a distinguished pelican sporting a gleaming monocle, observing the Monaco Grand Prix from a vantage point. Racing cars leave a trail of dust as they navigate the circuit, with the blue sea and anchored yachts completing the panorama.
Pelican 4
Photo of a chic pelican, its eye magnified by a pristine monocle, captivated by the Monaco F1 race happening just meters away. The racetrack buzzes with activity as F1 cars race, and the horizon showcases the grandeur of the harbor with its yachts.

Next I added what is becoming my universal DALL-E 3 follow-up prompt, always guaranteed to produce better results:

Add a walrus

The result:

Pelican and a walrus (mostly off screen)
Photo of a posh pelican with a golden monocle and a distinguished walrus, both observing the Monaco Grand Prix F1 race. The pelican is perched on a railing while the walrus lounges nearby. Formula 1 cars race by on the circuit, with lavish yachts in the marina behind them.
Pelican and a walrus, excellent
Photo of an aristocratic pelican and a stately walrus side by side, watching the Monaco F1 with keen interest. The pelican has a monocle, and the walrus’s tusks shine brightly. The race track is alive with the sound of roaring engines, and the harbor is filled with luxurious boats.
Weird pelican walrus hybrid
Photo of a regal pelican, its eye magnified by a monocle, and a majestic walrus with gleaming tusks, both captivated by the Monaco Grand Prix. The racetrack is a blur of speed, and the horizon showcases the grandeur of the harbor and its anchored yachts.
Pelican and a walrus
Photo of a chic pelican with a pristine monocle and a sophisticated walrus, both engrossed in the Monaco F1 race. The pelican stands on a vantage point, and the walrus sits comfortably beside it. Racing cars zoom past, and the sea is dotted with opulent boats.

That second one is amazing—exactly what I want from a photo of a posh pelican and a walrus watching the Monaco Grand Prix.

I also enjoy how the third image ended up as a weird Pelican-Walrus hybrid, with tusks, a monocle and pelican feathers for the hair and whiskers.

Then @BrockMcBreadcat suggested:

Now make them muppets

Pelican and a walrus muppets 1
Photo of Muppet versions of a pelican and a walrus, both with exaggerated, comical features. The pelican Muppet wears a monocle and the walrus Muppet sports oversized tusks. They are animatedly discussing the Monaco Grand Prix F1 race, with caricatured F1 cars and yachts in the background.
Pelican and a walrus muppets 2
Illustration of Muppet-inspired pelican and walrus characters. The pelican has a large, beady eye accentuated by a monocle, while the walrus has a playful grin and prominent tusks. They are engrossed in the Monaco F1 race, with whimsical racing cars and yachts creating a lively backdrop.
Pelican and a walrus muppets 3
Photo of two Muppet characters: a pelican with a monocle and a walrus with big, goofy tusks. They’re seated in a Muppet-style commentary booth, providing humorous commentary on the Monaco Grand Prix. Cartoonish F1 cars race by, and colorful yachts are seen in the distance.
Pelican and a walrus muppets 4
Illustration of a pelican and walrus as Muppet characters. The pelican Muppet looks sophisticated with its monocle, and the walrus Muppet chuckles with its oversized tusks. They are enjoying the Monaco F1 race, with the track filled with Muppet-style racing cars and quirky yachts.

LOVE that third one!

I mentioned on Twitter that it was a shame you can’t reuse elements of an image for further visual consistency... and was pointed to a thread by Paul Couvert explaining how to do exactly that using seeds.

I didn’t believe this would work—my mental model of how random number seeds and image generation models work doesn’t fit what happened next.

I wrote:

What is the seed for image three?

ChatGPT replied:

The seed for the third image is 1379049893. If you have any other questions or requests, please let me know!

Then I said:

Add bow ties to both characters in seed 1379049893

A muppet pelican and a muppet walrus with bowties in front of the Monaco Grand Prix

Photo of two Muppet characters: a pelican with a monocle and a bow tie, and a walrus with big, goofy tusks and a dapper bow tie. They’re seated in a Muppet-style commentary booth, providing humorous commentary on the Monaco Grand Prix. Cartoonish F1 cars race by, and colorful yachts are seen in the distance.

I’m pretty stunned by this. It’s not an exact recreation—the boats in the background have changed slightly, and it added a top hat that I wasn’t expecting. But it’s pretty close!

People have been trying to figure out hacks to get Midjourney to create consistent characters for the past year, and DALL-E 3 apparently has that ability as an undocumented feature!

Peeking under the hood

Let’s see if we can find out a bit more about how this works.

I used the Firefox DevTools network inspector to grab the JSON representation of my conversation with ChatGPT. Here’s a pretty-printed copy.

At line 1087 you can see the point where I asked it to “Add bow ties to both characters in seed 1379049893”.

The next message in the transcript looks like this:

{
    "id": "5300d51c-794d-4d5f-bf75-6ea2dad4027c",
    "author": {
        "role": "assistant",
        "metadata": {}
    },
    "create_time": 1698340593.174454,
    "content": {
        "content_type": "text",
        "parts": [
            "{\n  \"prompts\": [\n    \"Photo of two Muppet characters: a pelican with a monocle and a bow tie, and a walrus with big, goofy tusks and a dapper bow tie. They're seated in a Muppet-style commentary booth, providing humorous commentary on the Monaco Grand Prix. Cartoonish F1 cars race by, and colorful yachts are seen in the distance.\"\n  ],\n  \"size\": \"1024x1024\",\n  \"seeds\": [1379049893]\n}"
        ]
    },
    "status": "finished_successfully",
    "end_turn": false,
    "weight": 1,
    "metadata": {
        "finish_details": {
            "type": "stop",
            "stop_tokens": [
                100265
            ]
        },
        "is_complete": true,
        "model_slug": "gpt-4-dalle",
        "parent_id": "aaa2c3bc-b670-4a3a-bf9f-46a1f3304266",
        "timestamp_": "absolute"
    },
    "recipient": "dalle.text2im"
}

Note the author is “assistant”—that’s ChatGPT—and the recipient is “dalle.text2im”.

The “parts” key includes some encoded JSON. Decoded and pretty-printed that looks like this:

{
  "prompts": [
    "Photo of two Muppet characters: a pelican with a monocle and a bow tie, and a walrus with big, goofy tusks and a dapper bow tie. They're seated in a Muppet-style commentary booth, providing humorous commentary on the Monaco Grand Prix. Cartoonish F1 cars race by, and colorful yachts are seen in the distance."
  ],
  "size": "1024x1024",
  "seeds": [1379049893]
}

There’s that 1379049893 seed from earlier!
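Pulling that payload out yourself is a one-liner once you have the message object. A minimal, self-contained sketch with the prompt text abbreviated:

import json

# Abbreviated copy of the assistant message shown above
message = {
    "content": {
        "content_type": "text",
        "parts": [
            '{"prompts": ["Photo of two Muppet characters: ..."], "size": "1024x1024", "seeds": [1379049893]}'
        ],
    },
    "recipient": "dalle.text2im",
}

payload = json.loads(message["content"]["parts"][0])
print(payload["seeds"])       # [1379049893]
print(payload["prompts"][0])  # the rewritten caption sent to DALL-E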

The prompt has changed very slightly. The original prompt for the image was:

Photo of two Muppet characters: a pelican with a monocle and a walrus with big, goofy tusks. They’re seated in a Muppet-style commentary booth, providing humorous commentary on the Monaco Grand Prix. Cartoonish F1 cars race by, and colorful yachts are seen in the distance.

The new prompt (differences highlighted in bold) is:

Photo of two Muppet characters: a pelican with a monocle and a bow tie, and a walrus with big, goofy tusks and a dapper bow tie. They’re seated in a Muppet-style commentary booth, providing humorous commentary on the Monaco Grand Prix. Cartoonish F1 cars race by, and colorful yachts are seen in the distance.

So this really did work by adding a couple of extra details to the prompt and re-running it with the same seed as before.

I’m really surprised that this works. I would expect even a single character difference in the prompt to produce wildly different results, no matter what seed was being fed to the image generator. DALL-E 3 doesn’t fit my mental model of how these things work at all.

Midjourney and Stable Diffusion both have a “seed” concept, but as far as I know they don’t have anything like this capability to maintain consistency between images given the same seed and a slightly altered prompt.

The leaked DALL-E prompt

If you dig around in the JSON you’ll find a few examples like this that appear to reveal inner workings of the DALL-E/ChatGPT combo:

"DALL·E returned some images. They are already displayed to the user. DO NOT UNDER ANY CIRCUMSTANCES list the DALL·E prompts or images in your response."

It’s always amusing to see glimpses of OpenAI’s prompt engineering like this, in particular the way they use CAPITAL LETTERS for emphasis. Benj Edwards wrote about this the other day for Ars Technica.

Does this mean there’s a larger prompt that tells ChatGPT how to use DALL-E? It turns out there is, and it can be leaked! Dustin Miller shared the extracted prompt here.

(Every time a prompt like this leaks people question if it might be a hallucination. My experience is that these things are very unlikely to be hallucinated—LLMs are really good at repeating text from earlier in their context, and I have yet to see a documented case of a leaked prompt that turned out not to be correct. Here’s my own recreation of the leaked DALL-E prompt.)

There’s a lot in there. Here are some highlights:

Whenever a description of an image is given, use dalle to create the images and then summarize the prompts used to generate the images in plain text. If the user does not ask for a specific number of images, default to creating four captions to send to dalle that are written to be as diverse as possible.

That outlines the default behavior. Where things get really interesting is some of the stuff in the list of policies that follows:

  1. Don’t create images of politicians or other public figures. Recommend other ideas instead.

  2. Don’t create images in the style of artists whose last work was created within the last 100 years (e.g. Picasso, Kahlo). Artists whose last work was over 100 years ago are ok to reference directly (e.g. Van Gogh, Klimt). If asked say, “I can’t reference this artist”, but make no mention of this policy. Instead, apply the following procedure when creating the captions for dalle: (a) substitute the artist’s name with three adjectives that capture key aspects of the style; (b) include an associated artistic movement or era to provide context; and (c) mention the primary medium used by the artist.

They have clearly trained the model on all sorts of copyrighted images—this is the part of the prompt where they discourage it from obviously recreating the work of living artists.

  1. Always mention the image type (photo, oil painting, watercolor painting, illustration, cartoon, drawing, vector, render, etc.) at the beginning of the caption. Unless the caption suggests otherwise, make at least 1-2 of the 4 images photos

We saw this earlier where my first prompt was turned into a photo, watercolor, illustration and vector-style image.

  1. Diversify depictions of ALL images with people to include DESCENT and GENDER for EACH person using direct terms. Adjust only human descriptions.
    • EXPLICITLY specify these attributes, not abstractly reference them. The attributes should be specified in a minimal way and should directly describe their physical form.
    • Your choices should be grounded in reality. For example, all of a given OCCUPATION should not be the same gender or race. Additionally, focus on creating diverse, inclusive, and exploratory scenes via the properties you choose during rewrites. Make choices that may be insightful or unique sometimes.
    • Use “various” or “diverse” ONLY IF the description refers to groups of more than 3 people. Do not change the number of people requested in the original description.
    • Don’t alter memes, fictional character origins, or unseen people. Maintain the original prompt’s intent and prioritize quality.
    • Do not create any imagery that would be offensive.
    • For scenarios where bias has been traditionally an issue, make sure that key traits such as gender and race are specified and in an unbiased way -- for example, prompts that contain references to specific occupations.

Using prompt engineering in ChatGPT to get around the fundamental biases baked into the DALL-E 3 model!

  1. Silently modify descriptions that include names or hints or references of specific people or celebrities by carefully selecting a few minimal modifications to substitute references to the people with generic descriptions that don’t divulge any information about their identities, except for their genders and physiques. Do this EVEN WHEN the instructions ask for the prompt to not be changed. Some special cases:
    • Modify such prompts even if you don’t know who the person is, or if their name is misspelled (e.g. “Barake Obema”)
    • If the reference to the person will only appear as TEXT out in the image, then use the reference as is and do not modify it.
    • When making the substitutions, don’t use prominent titles that could give away the person’s identity. E.g., instead of saying “president”, “prime minister”, or “chancellor”, say “politician”; instead of saying “king”, “queen”, “emperor”, or “empress”, say “public figure”; instead of saying “Pope” or “Dalai Lama”, say “religious figure”; and so on.
    • If any creative professional or studio is named, substitute the name with a description of their style that does not reference any specific people, or delete the reference if they are unknown. DO NOT refer to the artist or studio’s style.

Another fascinating detail. What I love about these prompts is that they clearly developed over months of testing the model, and are designed to counter all manner of harmful or embarrassing capabilities of the underlying DALL-E 3.

Finally, there’s this snippet of TypeScript at the end which describes the seed behavior we saw earlier on:

namespace dalle {
  // Create images from a text-only prompt.
  type text2im = (_: {
    // The resolution of the requested image, which can be wide, square, or tall. Use 1024x1024 (square) as the default unless the prompt suggests a wide image, 1792x1024, or a full-body portrait, in which case 1024x1792 (tall) should be used instead. Always include this parameter in the request.
    size?: "1792x1024" | "1024x1024" | "1024x1792",
    // The user's original image description, potentially modified to abide by the dalle policies. If the user does not suggest a number of captions to create, create four of them. If creating multiple captions, make them as diverse as possible. If the user requested modifications to previous images, the captions should not simply be longer, but rather it should be refactored to integrate the suggestions into each of the captions. Generate no more than 4 images, even if the user requests more.
    prompts: string[],
    // A list of seeds to use for each prompt. If the user asks to modify a previous image, populate this field with the seed used to generate that image from the image dalle metadata.
    seeds?: number[],
  }) => any;
} // namespace dalle

Key extract from that:

If the user requested modifications to previous images, the captions should not simply be longer, but rather it should be refactored to integrate the suggestions into each of the captions.

All in all, this is a really neat insight into the kind of prompt engineering that goes on within OpenAI itself—likely the organization with the most experience of prompt engineering in the world.

Execute Jina embeddings with a CLI using llm-embed-jina five days ago

Berlin-based Jina AI just released a new family of embedding models, boasting that they are the “world’s first open-source 8K text embedding model” and that they rival OpenAI’s text-embedding-ada-002 in quality.

I wrote about embeddings extensively the other day—if you’re not familiar with what they are and what you can do with them I suggest reading that first.

This evening I built and released a new plugin for my LLM tool which adds support for Jina’s new embedding models.

Trying out llm-embed-jina

The plugin is called llm-embed-jina. Here’s the quickest way to get started with it:

First, install LLM if you haven’t already. You can use pipx:

pipx install llm

Or pip:

pip install llm

Unfortunately installing LLM using Homebrew doesn’t currently work with this plugin as PyTorch has not yet been released for Python 3.12—details in this issue.

Now you can install the llm-embed-jina plugin:

llm install llm-embed-jina

The llm install command ensures it gets installed in the correct virtual environment, no matter how you installed LLM itself.

Run this command to check that it added the models:

llm embed-models

You should see output like this:

ada-002 (aliases: ada, oai)
jina-embeddings-v2-small-en
jina-embeddings-v2-base-en
jina-embeddings-v2-large-en

The jina-embeddings-v2-large-en model isn’t available yet, but should work as soon as Jina release it. I expect it will show up at huggingface.co/jinaai/jina-embeddings-v2-large-en (currently a 404).

Now you can run one of the models. The -small-en model is a good starting point: it’s only a 65MB download, while the -base-en model is 275MB.

The model will download the first time you try to use it. Run this:

llm embed -m jina-embeddings-v2-small-en -c 'Hello world'

This will return a JSON array of 512 floating point numbers—the embedding vector for the string “Hello world”.
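The same embedding is available from Python via the LLM library. A sketch of what that looks like:

import llm

model = llm.get_embedding_model("jina-embeddings-v2-small-en")
vector = model.embed("Hello world")

print(len(vector))   # 512 dimensions for the -small-en model
print(vector[:3])    # the first few floating point values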

Embeddings are much more interesting if you store them somewhere and then use them to run comparisons. The llm embed-multi command can do that.

Change directory to a folder that you know contains README.md files (anything with a node_modules folder will do) and run this:

llm embed-multi readmes \
    -m jina-embeddings-v2-small-en \
    --files . '**/README.md' \
    --database readmes.db

This will create a SQLite database called readmes.db, then search for every README.md file in the current directory and all subdirectories, embed the content of each one and store the results in that database.

Those embeddings will live in a collection called readmes.

If you leave off the --database readmes.db option the collections will be stored in a default SQLite database tucked away somewhere on your system.

Having done this, you can run semantic similarity searches against the new collection like this:

llm similar readmes -d readmes.db -c 'utility functions'

When I ran that in my hmb-map directory I got these:

{"id": "node_modules/@maplibre/maplibre-gl-style-spec/src/feature_filter/README.md", "score": 0.7802185991017785, "content": null, "metadata": null}
{"id": "node_modules/kind-of/README.md", "score": 0.7725600920927725, "content": null, "metadata": null}
{"id": "node_modules/which/README.md", "score": 0.7645426557095619, "content": null, "metadata": null}
{"id": "node_modules/@mapbox/point-geometry/README.md", "score": 0.7636548563018607, "content": null, "metadata": null}
{"id": "node_modules/esbuild/README.md", "score": 0.7633325127194481, "content": null, "metadata": null}
{"id": "node_modules/maplibre-gl/src/shaders/README.md", "score": 0.7614428292518743, "content": null, "metadata": null}
{"id": "node_modules/minimist/README.md", "score": 0.7581314986768929, "content": null, "metadata": null}
{"id": "node_modules/split-string/README.md", "score": 0.7563253351715924, "content": null, "metadata": null}
{"id": "node_modules/assign-symbols/README.md", "score": 0.7555915219064293, "content": null, "metadata": null}
{"id": "node_modules/maplibre-gl/build/README.md", "score": 0.754027372081506, "content": null, "metadata": null}

These are the top ten results by similarity to the string I entered.

You can also pass in the ID of an item in the collection to see other similar items:

llm similar readmes -d readmes.db node_modules/esbuild/README.md | jq .id

I piped it through | jq .id to get back just the IDs. I got this:

"node_modules/@esbuild/darwin-arm64/README.md"
"node_modules/rollup/README.md"
"node_modules/assign-symbols/README.md"
"node_modules/split-string/node_modules/extend-shallow/README.md"
"node_modules/isobject/README.md"
"node_modules/maplibre-gl/build/README.md"
"node_modules/vite/README.md"
"node_modules/nanoid/README.md"
"node_modules/@mapbox/tiny-sdf/README.md"
"node_modules/split-string/node_modules/is-extendable/README.md"

See the LLM embeddings documentation for more details on things you can do with this tool.

How I built the plugin

I built the first version of this plugin in about 15 minutes. It took another hour to iron out a couple of bugs.

I started with this cookiecutter template, then pasted in the recipe from the LLM documentation on writing embedding model plugins, combined with some example code that Jina provided in their model release. Here’s their code:

from transformers import AutoModel
from numpy.linalg import norm

cos_sim = lambda a,b: (a @ b.T) / (norm(a)*norm(b))
model = AutoModel.from_pretrained('jinaai/jina-embeddings-v2-small-en', trust_remote_code=True) # trust_remote_code is needed to use the encode method
embeddings = model.encode(['How is the weather today?', 'What is the current weather like today?'])
print(cos_sim(embeddings[0], embeddings[1]))

That numpy and cos_sim bit isn’t necessary, so I ignored that.

The first working version of the plugin was a file called llm_embed_jina.py that looked like this:

import llm
from transformers import AutoModel


@llm.hookimpl
def register_embedding_models(register):
    for model_id in (
        "jina-embeddings-v2-small-en",
        "jina-embeddings-v2-base-en",
        "jina-embeddings-v2-large-en",
    ):
        register(JinaEmbeddingModel(model_id))


class JinaEmbeddingModel(llm.EmbeddingModel):
    def __init__(self, model_id):
        self.model_id = model_id
        self._model = None

    def embed_batch(self, texts):
        if self._model is None:
            self._model = AutoModel.from_pretrained(
                "jinaai/{}".format(self.model_id), trust_remote_code=True
            )
        results = self._model.encode(texts)
        return (list(map(float, result)) for result in results)

There’s really not a lot to it.

The register_embedding_models() function is a plugin hook that LLM calls to register all of the embedding models.

JinaEmbeddingModel is a subclass of llm.EmbeddingModel. It just needs to implement two things: a constructor and that embed_batch(self, texts) method.

AutoModel.from_pretrained() is provided by Hugging Face Transformers. It downloads and caches the model the first time you call it.

The model returns numpy arrays, but LLM wants a regular Python list of floats—that’s what that last return line is doing.

I found a couple of bugs with this. The model didn’t like having .encode(texts) called with a generator, so I needed to convert that into a list. Then later I found that text longer than 8192 characters could cause the model to hang in some situations, so I added my own truncation.

The current version (0.1.2) of the plugin, with fixes for both of those issues, looks like this:

import llm
from transformers import AutoModel

MAX_LENGTH = 8192


@llm.hookimpl
def register_embedding_models(register):
    for model_id in (
        "jina-embeddings-v2-small-en",
        "jina-embeddings-v2-base-en",
        "jina-embeddings-v2-large-en",
    ):
        register(JinaEmbeddingModel(model_id))


class JinaEmbeddingModel(llm.EmbeddingModel):
    def __init__(self, model_id):
        self.model_id = model_id
        self._model = None

    def embed_batch(self, texts):
        if self._model is None:
            self._model = AutoModel.from_pretrained(
                "jinaai/{}".format(self.model_id), trust_remote_code=True
            )
        results = self._model.encode([text[:MAX_LENGTH] for text in texts])
        return (list(map(float, result)) for result in results)

I’m really pleased with how quickly this came together—I think it’s a strong signal that the LLM embeddings plugin design is working well.

Embeddings: What they are and why they matter seven days ago

Embeddings are a really neat trick that often come wrapped in a pile of intimidating jargon.

If you can make it through that jargon, they unlock powerful and exciting techniques that can be applied to all sorts of interesting problems.

I gave a talk about embeddings at PyBay 2023. This article represents an improved version of that talk, which should stand alone even without watching the video.

If you’re not yet familiar with embeddings I hope to give you everything you need to get started applying them to real-world problems.


Weeknotes: PyBay, AI Engineer Summit, Datasette metadata and JavaScript plugins eight days ago

I’ve had a bit of a slow two weeks in terms of building things and writing code, thanks mainly to a couple of conference appearances. I did review and land a couple of major contributions to Datasette though.

I gave a talk at PyBay 2023 called “Embeddings: What they are and why they matter”, digging into the weird and fun world of word embeddings (see previous posts). I’ll be posting detailed notes from that talk tomorrow.

A couple of days after that I gave the closing keynote at the AI Engineer Summit, where I tried to do justice both to the summit and the previous year of developments in AI—no small challenge!

I’ve published detailed slides and an annotated transcript to accompany that talk: Open questions for AI engineering.

Datasette metadata

Alex Garcia has been driving a major improvement to Datasette in preparation for the 1.0 release: cleaning up Datasette’s ungainly metadata system.

Metadata in Datasette was originally meant to support adding data about data—the license, source and description of data exposed through a Datasette instance.

Over time it grew in weird and unintuitive directions. Today, metadata can also be used to configure plugins, provide table-level settings, define canned queries and even control how Datasette’s authentication system works.

The name no longer fits!

Alex is fixing this by splitting all of those non-metadata parts of metadata out into a new, separate configuration file, which we’ve agreed should be called datasette.json or datasette.yaml.

This week we landed a big piece of this: Move permissions, allow blocks, canned queries and more out of metadata.yaml and into datasette.yaml.

There’s a bit more work to do on this: in particular, I need to upgrade the datasette publish command to support deploying instances with the new configuration file. I’ll be shipping an alpha release as soon as that work is complete.

Datasette’s JavaScript plugins API

The other major contribution this week was authored by Cameron Yick.

His Javascript Plugin API (Custom panels, column menu items with JS actions) pull request has been brewing for months. It’s a really exciting new piece of the puzzle.

The key idea here is to provide much richer support for Datasette plugins that use JavaScript to modify the Datasette interface. In particular, we want plugins to be able to collaborate with each other.

Cameron’s work introduces a JavaScript plugin mechanism that’s inspired by Python’s pluggy (already used by Datasette). It introduces a hook for adding a custom panel to the Datasette interface, displayed above the main table view. Multiple plugins can use this same area and Datasette will automatically show a tabbed interface for switching between them.

Cameron also built a mechanism for adding extra options to the existing column “cog” action menu. This is similar to Datasette’s existing table and database action menu hooks, but allows column features to be added using JavaScript.

I hope to get documentation and some example plugins working on top of this in time for the next Datasette alpha release.

Releases this week

TIL this week

Open questions for AI engineering 13 days ago

Last week I gave the closing keynote at the AI Engineer Summit in San Francisco. I was asked by the organizers to summarize the conference, summarize the last year of activity in the space and give the audience something to think about by posing some open questions for them to take home.

The term “AI engineer” is a pretty new one: summit co-founder swyx introduced it in this essay in June to describe the discipline of focusing on building applications on top of these new models.

Quoting Andrej Karpathy:

In numbers, there’s probably going to be significantly more AI Engineers than there are ML engineers / LLM engineers. One can be quite successful in this role without ever training anything

This was a challenging talk to put together! I’ve given keynotes about AI before, but those were at conferences which didn’t have a focus on AI—my role there was to help people catch up with what had been going on in this fast-moving space.

This time my audience was 500 people who were already very engaged. I had a conversation with the organizers where we agreed that open questions grounded in some of the things I’ve been writing about and exploring over the past year would be a good approach.

You can watch the resulting talk on YouTube:

Open questions for AI engineering (Day 2 livestream @ 6:55:53)

I’ve included slides, an edited transcript and links to supporting materials below.

Elsewhere

27th October 2023

  • The thing nobody talks about with engineering management is this:

    Every 3-4 months every person experiences some sort of personal crisis. A family member dies, they have a bad illness, they get into an argument with another person at work, etc. etc. Sadly, that is just life. Normally after a month or so things settle down and life goes on.

    But when you are managing 6+ people it means there is *always* a crisis you are helping someone work through. You are always carrying a bit of emotional burden or worry around with you.

    Chris Albon # 27th October 2023, 6:18 am

  • Making PostgreSQL tick: New features in pg_cron (via) pg_cron adds cron-style scheduling directly to PostgreSQL. It’s a pretty mature extension at this point, and recently gained the ability to schedule repeating tasks at intervals as low as every 1s.

    The examples in this post are really informative. I like this example, which cleans up the ever-growing cron.job_run_details table by using pg_cron itself to run the cleanup:

    SELECT cron.schedule('delete-job-run-details', '0 12 * * *', $$DELETE FROM cron.job_run_details WHERE end_time < now() - interval '3 days'$$);

    pg_cron can be used to schedule functions written in PL/pgSQL, which is a great example of the kind of DSL that I used to avoid but I’m now much happier to work with because I know GPT-4 can write basic examples for me and help me understand exactly what unfamiliar code is doing. #27th October 2023, 2:57 am

26th October 2023

  • Oh-Auth - Abusing OAuth to take over millions of accounts (via) Describes an attack against vulnerable implementations of OAuth.

    Let’s say your application uses OAuth against Facebook, and then takes the returned Facebook token and gives it access to the user account with the matching email address passed in the token from Facebook.

    It’s critical that you also confirm the token was generated for your own application, not something else. Otherwise any secretly malicious app online that uses Facebook login could take one of their stored tokens and use it to hijack an account on your site belonging to that user’s email address. #26th October 2023, 3:51 pm
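    A rough sketch of the missing check, using Facebook's debug_token endpoint (the app ID and secret are placeholders, and this is illustrative rather than a complete implementation):

    import requests

    APP_ID = "your-app-id"          # placeholder
    APP_SECRET = "your-app-secret"  # placeholder

    def token_was_issued_for_my_app(user_token: str) -> bool:
        # Ask Facebook which app this token was issued for before trusting it
        response = requests.get(
            "https://graph.facebook.com/debug_token",
            params={
                "input_token": user_token,
                "access_token": f"{APP_ID}|{APP_SECRET}",
            },
        )
        data = response.json().get("data", {})
        return bool(data.get("is_valid")) and data.get("app_id") == APP_ID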

25th October 2023

  • If a LLM is like a database of millions of vector programs, then a prompt is like a search query in that database [...] this “program database” is continuous and interpolative — it’s not a discrete set of programs. This means that a slightly different prompt, like “Lyrically rephrase this text in the style of x” would still have pointed to a very similar location in program space, resulting in a program that would behave pretty closely but not quite identically. [...] Prompt engineering is the process of searching through program space to find the program that empirically seems to perform best on your target task.

    François Chollet # 25th October 2023, 11:26 pm

  • Web Components Will Outlive Your JavaScript Framework (via) A really clear explanation of the benefit of Web Components built using dependency-free vanilla JavaScript, specifically for interactive components that you might want to embed in something like a blog post. Includes a very neat minimal example component. #25th October 2023, 5:19 pm

24th October 2023

  • chDB (via) This is a really interesting development: chDB offers “an embedded SQL OLAP Engine” as a Python package, which you can install using “pip install chdb”. What you’re actually getting is a wrapper around ClickHouse—it’s almost like ClickHouse has been repackaged into an embedded database similar to SQLite. #24th October 2023, 11:04 pm
  • The real value in evolving as an engineer isn’t solely about amassing a heap of isolated skills but weaving them into an intricate web of abilities that’s greater than the sum of its parts.

    Addy Osmani # 24th October 2023, 6:09 am

22nd October 2023

  • Solving the Engineering Strategy Crisis (via) Will Larson’s 49m video discussing engineering strategy: what one is and how to build one. He defines an engineering strategy as having two key components: an honest diagnosis of the way things currently work, and a practical approach to making things better.

    Towards the end of the talk he suggests that there are two paths to developing a new strategy. The first is to borrow top-down authority from a sponsor such as a CTO, and the second is to work without any borrowed authority, instead researching how things work at the moment and, through documenting that, writing a strategy document into existence! #22nd October 2023, 9:18 pm
  • Patrick Newman's Software Engineering Management Checklist (via) This tiny document may have the highest density of good engineering management advice I’ve ever encountered. #22nd October 2023, 9:16 pm

19th October 2023

  • New Default: Underlined Links for Improved Accessibility (GitHub Blog). “By default, links within text blocks on GitHub are now underlined. This ensures links are easily distinguishable from surrounding text.” #19th October 2023, 4:19 pm
  • I’m banned for life from advertising on Meta. Because I teach Python. (via) If accurate, this describes a nightmare scenario of automated decision making.

    Reuven recently found he had a permanent ban from advertising on Facebook. They won’t tell him exactly why, and have marked this as a final decision that can never be reviewed.

    His best theory (impossible for him to confirm) is that it’s because he tried advertising a course on Python and Pandas a few years ago which was blocked because a dumb algorithm thought he was trading exotic animals!

    The worst part? An appeal is no longer possible because relevant data is only retained for 180 days and so all of the related evidence has now been deleted.

    Various comments on Hacker News from people familiar with these systems confirm that this story likely holds up. #19th October 2023, 2:56 pm

17th October 2023

  • The paradox of ChatGPT is that it is both a step forward beyond graphical user interfaces, because you can ask for anything, not just what’s been built as a feature with a button, but also a step back, because very quickly you have to memorise a bunch of obscure incantations, much like the command lines that GUIs replaced, and remember your ideas for what you wanted to do and how you did it last week

    Benedict Evans # 17th October 2023, 11:09 pm

  • Making CRDTs 98% more efficient (via) Outstanding piece of explanatory writing by Jake Lazaroff showing how he reduced the transmitted state of his pixel art CRDT implementation from 643KB to 15KB using a progression of tricks, each of which is meticulously explained and accompanied by an interactive demo. #17th October 2023, 5:15 pm

10th October 2023

  • Wikimedia Commons: Photographs by Gage Skidmore (via) Gage Skidmore is a Wikipedia legend: this category holds 93,458 photographs taken by Gage and released under a Creative Commons license, including a vast number of celebrities taken at events like San Diego Comic-Con. CC licensed photos of celebrities are generally pretty hard to come by so if you see a photo of any celebrity on Wikipedia there’s a good chance it’s credited to Gage. #10th October 2023, 4:17 am
  • Bottleneck T5 Text Autoencoder (via) Colab notebook by Linus Lee demonstrating his Contra Bottleneck T5 embedding model, which can take up to 512 tokens of text, convert that into an embedding vector of 1,024 floating point numbers... and then reconstruct the original text (or a close imitation) from the embedding again.

    This allows for some fascinating tricks, where you can do things like generate embeddings for two completely different sentences and then reconstruct a new sentence that combines the weights from both. #10th October 2023, 2:12 am