Blog of Nicholas Carlini

A research scientist at Google DeepMind working at the intersection of machine learning and computer security.

Letting Language Models Write my Website

2024-12-25 08:00:00

I let a language model write my bio. It went about as well as you might expect.

You should forecast the future of AI

2024-11-25 08:00:00

The field of AI is progressing much faster than many expected. When things are changing this fast, it can be hard to remember what you thought was impossible just a few years ago, and, conversely, what you thought would obviously be trivial to solve but still hasn't been.

How I use "AI"

2024-08-01 08:00:00

I don't think that "AI" models (by which I mean: large language models) are over-hyped.

Why I attack

2024-06-24 08:00:00

Yesterday I was forwarded a bunch of messages that Prof. Ben Zhao (a computer science professor at the University of Chicago; a full professor with tenure, so I feel entirely within my rights to call him out here) wrote about me on a public Discord server with 15,000 members, including this gem:

(yet another) Broken Adversarial Example Defense at IEEE S&P 2024

2024-05-06 08:00:00

IEEE S&P 2024 (one of the top computer security conferences) has, again, accepted an adversarial example defense paper that is broken by simple attacks. The paper makes claims that are mathematically impossible, does not follow the recommended guidance on evaluating adversarial robustness, and its own figures present all the evidence needed to see that the evaluation was conducted incorrectly.
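
The excerpt doesn't show the attack itself; purely as a rough sketch of what a "simple attack" means in this setting, here is a minimal projected gradient descent (PGD) loop, assuming a differentiable PyTorch classifier. This illustrates the standard baseline such evaluations are expected to report, not the specific attack used in the post.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=20):
    """Projected gradient descent: the standard baseline attack that any
    adversarial example defense evaluation is expected to survive."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        # Take a signed-gradient ascent step on the loss, then project
        # back into the eps-ball around x and the valid pixel range.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x + torch.clamp(x_adv - x, -eps, eps)
        x_adv = torch.clamp(x_adv, 0.0, 1.0)
    return x_adv.detach()
```

If a defense's reported robustness collapses under even a loop like this, the original evaluation was flawed.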

My benchmark for large language models

2024-02-19 08:00:00

A benchmark of ~100 tests for language models, collected from actual questions I've asked of them over the past year.
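
The tests themselves are in the full post; purely as an illustration of the shape one such test might take (every name below is hypothetical, not the benchmark's actual API), a test pairs a prompt with a programmatic grader that runs the model's answer:

```python
# Hypothetical sketch: run_test, grade_gcd, and query_model are
# illustrative names, not the benchmark's actual API.
def run_test(query_model, prompt, grade):
    """Ask the model one question and grade the response programmatically."""
    return grade(query_model(prompt))

def grade_gcd(response):
    # Assume the prompt asked for raw code only; execute it and check
    # the resulting function on a known case (illustrative grading).
    scope = {}
    exec(response, scope)
    return scope["gcd"](48, 36) == 12

# query_model would wrap whatever LLM API is under test, e.g.:
#   passed = run_test(query_model,
#                     "Write only a Python function gcd(a, b), no prose.",
#                     grade_gcd)
```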