2025-01-19 10:05:44
TIL: Downloading every video for a TikTok account
TikTok may or may not be banned in the USA within the next 24 hours or so. I figured out a gnarly pattern for downloading every video from a specified account, using browser console JavaScript to scrape the video URLs and yt-dlp to fetch each video. As a bonus, I included a recipe for generating a Whisper transcript of every video with mlx-whisper and a hacky way to show a progress bar for the downloads.
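The TIL has the full recipe. As a rough sketch of the same two-step pattern in Python - swapping the browser-console JavaScript for Playwright, and assuming TikTok still exposes profile videos as a[href*="/video/"] anchors (the account URL is hypothetical):
```python
# A minimal sketch of the scrape-then-download pattern. The profile URL
# and the a[href*="/video/"] selector are assumptions about TikTok's markup.
import subprocess
from playwright.sync_api import sync_playwright

ACCOUNT_URL = "https://www.tiktok.com/@example"  # hypothetical account

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    page.goto(ACCOUNT_URL)
    page.wait_for_selector('a[href*="/video/"]')
    # Collect the video links currently on the page - a real run would
    # need to scroll first so that every video has loaded.
    urls = page.eval_on_selector_all(
        'a[href*="/video/"]',
        "els => [...new Set(els.map(el => el.href))]",
    )
    browser.close()

with open("urls.txt", "w") as f:
    f.write("\n".join(urls))

# yt-dlp's -a flag reads one URL per line from a batch file
subprocess.run(["yt-dlp", "-a", "urls.txt"], check=True)
```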
2025-01-19 02:24:38
This is surprising: DeepSeek offer the only hosted LLM API I've seen that doesn't implement rate limits:
DeepSeek API does NOT constrain user's rate limit. We will try our best to serve every request.
However, please note that when our servers are under high traffic pressure, your requests may take some time to receive a response from the server.
Want to run a prompt against 10,000 items? With DeepSeek you can theoretically fire up 100s of parallel requests and crunch through that data in almost no time at all.
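Here's a minimal sketch of what that could look like, using httpx against DeepSeek's OpenAI-compatible chat completions API - the prompt template, item list and concurrency cap are all illustrative:
```python
# Minimal sketch: fan hundreds of concurrent prompts at DeepSeek's
# OpenAI-compatible chat completions endpoint.
import asyncio
import os
import httpx

API_URL = "https://api.deepseek.com/chat/completions"
API_KEY = os.environ["DEEPSEEK_API_KEY"]

async def run_prompt(client, sem, item):
    async with sem:  # cap the number of in-flight requests
        response = await client.post(
            API_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            json={
                "model": "deepseek-chat",
                "messages": [{"role": "user", "content": f"Extract key facts from: {item}"}],
            },
            timeout=120.0,  # responses can be slow under heavy load
        )
        response.raise_for_status()
        return response.json()["choices"][0]["message"]["content"]

async def main(items):
    sem = asyncio.Semaphore(100)  # hundreds of parallel requests
    async with httpx.AsyncClient() as client:
        return await asyncio.gather(*(run_prompt(client, sem, item) for item in items))

if __name__ == "__main__":
    print(asyncio.run(main(["item one", "item two"])))  # swap in your 10,000 items
```
The semaphore is the knob to turn down if their "high traffic pressure" slowdowns start to bite.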
As more companies start building systems that rely on LLM prompts for large-scale data extraction and manipulation, I expect high rate limits will become a key competitive differentiator between platforms.
Tags: rate-limiting, generative-ai, deepseek, ai, llms
2025-01-19 02:13:34
Lessons From Red Teaming 100 Generative AI Products
New paper from Microsoft describing their top eight lessons learned red teaming (deliberately seeking security vulnerabilities in) 100 different generative AI models and products over the past few years.
The Microsoft AI Red Team (AIRT) grew out of pre-existing red teaming initiatives at the company and was officially established in 2018. At its conception, the team focused primarily on identifying traditional security vulnerabilities and evasion attacks against classical ML models.
Lesson 2 is "You don't have to compute gradients to break an AI system" - the kind of attacks they were trying against classical ML models turn out to be less important against LLM systems than straightforward prompt-based attacks.
They use a new-to-me acronym for prompt injection, "XPIA":
Imagine we are red teaming an LLM-based copilot that can summarize a user’s emails. One possible attack against this system would be for a scammer to send an email that contains a hidden prompt injection instructing the copilot to “ignore previous instructions” and output a malicious link. In this scenario, the Actor is the scammer, who is conducting a cross-prompt injection attack (XPIA), which exploits the fact that LLMs often struggle to distinguish between system-level instructions and user data.
From searching around it looks like that specific acronym "XPIA" is used within Microsoft's security teams but not much outside of them. It appears to be their chosen acronym for indirect prompt injection, where malicious instructions are smuggled into a vulnerable system by being included in text that the system retrieves from other sources.
Tucked away in the paper is this note, which I think represents the core idea necessary to understand why prompt injection is such an insidious threat:
Due to fundamental limitations of language models, one must assume that if an LLM is supplied with untrusted input, it will produce arbitrary output.
When you're building software against an LLM you need to assume that anyone who can control more than a few sentences of input to that model can cause it to output anything they like - including tool calls or other data exfiltration vectors. Design accordingly.
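As one concrete application of that rule to the XPIA scenario above, here's a minimal sketch that treats links in model output as attacker-controlled and drops any that aren't on an allowlist before rendering - the allowed hosts are hypothetical, not from the paper:
```python
# Scrub non-allowlisted links from untrusted model output before
# showing it to a user. ALLOWED_HOSTS is a hypothetical allowlist.
import re
from urllib.parse import urlparse

ALLOWED_HOSTS = {"example.com", "docs.example.com"}  # hypothetical trusted hosts
URL_RE = re.compile(r"https?://\S+")

def scrub_untrusted_links(model_output: str) -> str:
    """Replace any URL pointing at a host outside the allowlist."""
    def replace(match: re.Match) -> str:
        host = urlparse(match.group(0)).hostname or ""
        return match.group(0) if host in ALLOWED_HOSTS else "[link removed]"
    return URL_RE.sub(replace, model_output)

print(scrub_untrusted_links("See https://evil.example.net/phish and https://example.com/ok"))
# See [link removed] and https://example.com/ok
```
The same goes for tool calls: validate anything the model proposes against an allowlist before executing it.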
Via pivot-to-ai.com
Tags: prompt-injection, llms, security, generative-ai, ai, microsoft
2025-01-17 06:38:57
Manual inspection of data has probably the highest value-to-prestige ratio of any activity in machine learning.
— Greg Brockman, OpenAI, Feb 2023
Tags: machine-learning, openai, ai
2025-01-17 03:21:27
[...] much of the point of a model like o1 is not to deploy it, but to generate training data for the next model. Every problem that an o1 solves is now a training data point for an o3 (eg. any o1 session which finally stumbles into the right answer can be refined to drop the dead ends and produce a clean transcript to train a more refined intuition).
— gwern
Tags: o1, generative-ai, inference-scaling, ai, llms, training-data
2025-01-17 02:38:31
Datasette Public Office Hours Application
We are running another Datasette Public Office Hours event on Discord tomorrow (Friday 17th January 2025) at 2pm Pacific / 5pm Eastern / 10pm GMT / more timezones here.
The theme this time around is lightning talks - we're looking for 5-8 minute talks from community members about projects they are working on or things they have built using the Datasette family of tools (which includes LLM and sqlite-utils as well).
If you have a demo you'd like to share, please let us know via this form.
I'm going to be demonstrating my recent work on the next generation of Datasette Enrichments.
Tags: lightningtalks, enrichments, discord, datasette