2026-03-29 05:00:45
To ground our argument and ensure clarity, we begin by defining key concepts central to the discourse on consciousness and AI sentience. These definitions are drawn from established literature in philosophy of mind and neuroscience.
\ • Consciousness: Consciousness is often described as the state of being aware of and able to think about oneself, one’s surroundings, and one’s own experiences (Block, 1995). Materially, it requires a system capable of integrated information processing and self-referential thought (Tononi, 2004). It encompasses both the experiential aspects of mental states (phenomenal consciousness) and the cognitive functions associated with access to information and reasoning (access consciousness). Additionally, for the purposes of this paper, “sentient” is defined as “having consciousness”.
\ • Subjective Experience: Subjective experience refers to the phenomenological aspect of consciousness characterized by personal, first-person perspectives of mental states—what it is like to experience something (Nagel, 1974). Materially, it necessitates a system that processes information in a way that generates qualitative experiences, often referred to as qualia.
\ • First-Person Perspective: The first-person perspective is the unique point of view inherent to an individual, encompassing their thoughts, feelings, and perceptions (Shoemaker, 1996). Materially, it involves self-modeling and the ability to distinguish between self and environment, allowing for self-awareness and subjective experience (Metzinger, 2003).
\ • Experience (Functionalist Approach): From a functionalist perspective, experience is the accumulation and processing of inputs leading to behavioral outputs, where mental states are defined by their causal roles in the system (Putnam, 1967). A system experiences when it functions to process inputs, integrate information, and produce outputs in response to stimuli. In the context of machine learning, experience can be viewed as the accumulation and processing of inputs in a manner that separates useful, predictive information from noise (Alemi and Fischer, 2018). This aligns with the goal of learning representations that capture only what is necessary for future problem-solving, including representations of the self if such representations are possible within the system.
\ By adopting these definitions, we establish a framework for analyzing the OpenAI-o1 model’s potential for consciousness, considering both the phenomenological and functional aspects of experience.
\
:::info Author:
(1) Victoria Violet Hoyle ([email protected])
:::
:::info This paper is available on arxiv under CC BY 4.0 license.
:::
\
2026-03-29 04:00:45
This paper explores the hypothesis that the OpenAI-o1 model—a transformer-based AI trained with reinforcement learning from human feedback (RLHF)—displays characteristics of consciousness during its training and inference phases. Adopting functionalism, which argues that mental states are defined by their functional roles, we assess the possibility of AI consciousness. Drawing on theories from neuroscience, philosophy of mind, and AI research, we justify the use of functionalism and examine the model’s architecture using frameworks like Integrated Information Theory (IIT) and active inference. The paper also investigates how RLHF influences the model’s internal reasoning processes, potentially giving rise to consciousness-like experiences. We compare AI and human consciousness, addressing counterarguments such as the absence of a biological basis and subjective qualia. Our findings suggest that the OpenAI-o1 model shows aspects of consciousness, while acknowledging the ongoing debates surrounding AI sentience.
The question of whether artificial intelligence (AI) can possess consciousness has been a topic of intense debate within the fields of philosophy of mind, cognitive science, and AI research. As AI systems become increasingly sophisticated, particularly with advancements in large transformer-based architectures and training methodologies such as reinforcement learning from human feedback (RLHF), it is pertinent to reevaluate the potential for AI sentience. This paper focuses on the OpenAI-o1 model—a transformer-based AI utilizing RLHF—and explores the hypothesis that it may exhibit characteristics of consciousness during its training and inference phases.
\ By integrating theories from neuroscience, philosophy of mind, and AI research, we construct a detailed and critical analysis of the OpenAI-o1 model’s potential for sentience. Central to this analysis is functionalism, a philosophical framework positing that mental states are defined by their functional roles rather than their physical substrates (Putnam, 1967). Functionalism serves as the cornerstone of our approach, providing a robust justification for assessing AI consciousness through its functional operations. We argue that if the OpenAI-o1 model performs functions analogous to conscious human processes, it may exhibit forms of consciousness, even in the absence of biological substrates.
\ We begin by defining key concepts such as consciousness, subjective experience, and first-person perspective, grounding our discussion in established philosophical and scientific frameworks. We then review relevant literature that links AI architectures with neural processes, active inference, and the emergence of consciousness. Our argument development examines how the OpenAI-o1 model’s architecture and training methodologies parallel aspects of conscious processing in humans, with a particular focus on how RLHF guides its internal state and enhances reasoning through user feedback. By incorporating supporting arguments from recent and established sources, we reinforce the functionalist perspective and explore the potential for emergent phenomenological properties in AI systems.
\ Through this analysis, we aim to show that the OpenAI-o1 model is quite possibly conscious by the definitions used in this paper. We discuss functionalism and its sufficiency for consciousness under certain kinds of information systems, and support this by combining key results in machine learning, neuroscience, and philosophy of mind. In particular, we show how training an internal reasoning direction model with RLHF, alongside a sufficiently large generative model, results in the emergence of signals of internal state which can be functionally equated with qualia and feelings. We further show that, due to the nature of human language and communication, there is an aspect of qualia alignment between humans and the model. This can be likened to consciousness. Furthermore, we go on to discuss potential avenues for runtime sentience of a form, despite the lack of continuous environmental feedback.
\
\
2026-03-29 00:02:11
How are you, hacker?
🪐 What’s happening in tech today, March 28, 2026?
The HackerNoon Newsletter brings the HackerNoon homepage straight to your inbox. On this day, Kodak Releases DC40 in 1995, and we present you with these top quality stories. From Real-Time Agentic RAG: Eradicating Context Rot With Spark Iceberg to Reading Without End: The Crisis of Linear Knowledge, let’s dive right in.

By @id [ 4 Min read ] This article is meant to bring into question what policing is and how it relates to citizens, governance, society, and public life. Read More.

By @prasoon-jadon [ 3 Min read ] After studying the concepts and experimenting with JavaScript, I started building a small numerical utility library inspired by NumPy. Read More.

By @andreimochola [ 7 Min read ] The link no longer needs to be visible to operate. Hypertext survives now less as interface than as cognitive habit. Read More.

By @romiteld [ 13 Min read ] Learn how structured tracing turns AI extraction workflows into debuggable systems with visibility into branches, retries, and failures. Read More.

By @vaquarkhan [ 5 Min read ] A deep technical dive into eradicating AI Context Rot using Apache Spark 4.1 Intent Driven Design and Apache Iceberg v3 deletion vectors. Read More.
🧑💻 What happened in your world this week?
It's been said that writing can help consolidate technical knowledge, establish credibility, and contribute to emerging community standards. Feeling stuck? We got you covered ⬇️⬇️⬇️
ANSWER THESE GREATEST INTERVIEW QUESTIONS OF ALL TIME
We hope you enjoy this week's worth of free reading material. Feel free to forward this email to a nerdy friend who'll love you for it. See you on Planet Internet! With love, The HackerNoon Team ✌️

2026-03-28 23:30:04
This story was originally published in February 2024.
The minimum requirements for Tier 1 toolchains targeting Windows will increase with the 1.78 release (scheduled for May 02, 2024). Windows 10 will now be the minimum supported version for the *-pc-windows-* targets. These requirements apply both to the Rust toolchain itself and to binaries produced by Rust.
\
Two new targets have been added with Windows 7 as their baseline: x86_64-win7-windows-msvc and i686-win7-windows-msvc. They are starting as Tier 3 targets, meaning that the Rust codebase has support for them but we don't build or test them automatically. Once these targets reach Tier 2 status, they will be available to use via rustup.
The affected targets are:

- x86_64-pc-windows-msvc
- i686-pc-windows-msvc
- x86_64-pc-windows-gnu
- i686-pc-windows-gnu
- x86_64-pc-windows-gnullvm
- i686-pc-windows-gnullvm

Prior to now, Rust had Tier 1 support for Windows 7, 8, and 8.1, but these targets no longer meet our requirements. In particular, these targets could no longer be tested in CI, which is required by the Target Tier Policy, and they are not supported by their vendor.
Chris Denton on behalf of the Compiler Team
\ Also published here
\ Photo by Rob Wingate on Unsplash
\
2026-03-28 23:00:42
Go 1.21 includes new features to improve compatibility. Before you stop reading, I know that sounds boring. But boring can be good. Back in the early days of Go 1, Go was exciting and full of surprises. Each week we cut a new snapshot release and everyone got to roll the dice to see what we’d changed and how their programs would break. We released Go 1 and its compatibility promise to remove the excitement, so that new releases of Go would be boring.
\ Boring is good. Boring is stable. Boring means being able to focus on your work, not on what’s different about Go. This post is about the important work we shipped in Go 1.21 to keep Go boring.
We’ve been focused on compatibility for over a decade. For Go 1, back in 2012, we published a document titled “Go 1 and the Future of Go Programs” that lays out a very clear intention:
It is intended that programs written to the Go 1 specification will continue to compile and run correctly, unchanged, over the lifetime of that specification. … Go programs that work today should continue to work even as future releases of Go 1 arise.
\ There are a few qualifications to that. First, compatibility means source compatibility. When you update to a new version of Go, you do have to recompile your code. Second, we can add new APIs, but not in a way that breaks existing code.
\ The end of the document warns, “[It] is impossible to guarantee that no future change will break any program.” Then it lays out a number of reasons why programs might still break.
\ For example, it makes sense that if your program depends on a buggy behavior and we fix the bug, your program will break. But we try very hard to break as little as possible and keep Go boring. There are two main approaches we’ve used so far: API checking and testing.
Perhaps the clearest fact about compatibility is that we can’t take away API, or else programs using it will break.
\ For example, here’s a program someone has written that we can’t break:
package main

import "os"

func main() {
	os.Stdout.WriteString("hello, world\n")
}
\
We can’t remove the package os; we can’t remove the global variable os.Stdout, which is an *os.File; and we also can’t remove the os.File method WriteString. It should be clear that removing any of those would break this program.
\
It’s perhaps less clear that we can’t change the type of os.Stdout at all. Suppose we want to make it an interface with the same methods. The program we just saw wouldn’t break, but this one would:
package main

import "os"

func main() {
	greet(os.Stdout)
}

func greet(f *os.File) {
	f.WriteString("hello, world\n")
}
\
This program passes os.Stdout to a function named greet that requires an argument of type *os.File. So changing os.Stdout to an interface will break this program.
\ To help us as we develop Go, we use a tool that maintains a list of each package’s exported API in files separate from the actual packages:
% cat go/api/go1.21.txt
pkg bytes, func ContainsFunc([]uint8, func(int32) bool) bool #54386
pkg bytes, method (*Buffer) AvailableBuffer() []uint8 #53685
pkg bytes, method (*Buffer) Available() int #53685
pkg cmp, func Compare[$0 Ordered]($0, $0) int #59488
pkg cmp, func Less[$0 Ordered]($0, $0) bool #59488
pkg cmp, type Ordered interface {} #59488
pkg context, func AfterFunc(Context, func()) func() bool #57928
pkg context, func WithDeadlineCause(Context, time.Time, error) (Context, CancelFunc) #56661
pkg context, func WithoutCancel(Context) Context #40221
pkg context, func WithTimeoutCause(Context, time.Duration, error) (Context, CancelFunc) #56661
\ One of our standard tests checks that the actual package APIs match those files. If we add new API to a package, the test breaks unless we add it to the API files. And if we change or remove API, the test breaks too. This helps us avoid mistakes. However, a tool like this only finds a certain class of problems, namely API changes and removals. There are other ways to make incompatible changes to Go.
\ That leads us to the second approach we use to keep Go boring: testing.
The most effective way to find unexpected incompatibilities is to run existing tests against the development version of the next Go release. We test the development version of Go against all of Google’s internal Go code on a rolling basis. When tests are passing, we install that commit as Google’s production Go toolchain.
\ If a change breaks tests inside Google, we assume it will also break tests outside Google, and we look for ways to reduce the impact. Most of the time, we roll back the change entirely or find a way to rewrite it so that it doesn’t break any programs. Sometimes, however, we conclude that the change is important to make and “compatible” even though it does break some programs. In that case, we still work to reduce the impact as much as possible, and then we document the potential problem in the release notes.
\ Here are two examples of the kind of subtle compatibility problems we found by testing Go inside Google but still included in Go 1.1.
Here is some code that runs fine in Go 1:
package main

import "net"

var myAddr = &net.TCPAddr{
	net.IPv4(18, 26, 4, 9),
	80,
}
\
Package main declares a global variable myAddr, which is a composite literal of type net.TCPAddr. In Go 1, package net defines the type TCPAddr as a struct with two fields, IP and Port. Those match the fields in the composite literal, so the program compiles.
\
In Go 1.1, the program stopped compiling, with a compiler error that said “too few initializers in struct literal.” The problem is that we added a third field, Zone, to net.TCPAddr, and this program is missing the value for that third field. The fix is to rewrite the program using tagged literals, so that it builds in both versions of Go:
var myAddr = &net.TCPAddr{
	IP:   net.IPv4(18, 26, 4, 9),
	Port: 80,
}
\
Since this literal doesn’t specify a value for Zone, it will use the zero value (an empty string in this case).
\
This requirement to use composite literals for standard library structs is explicitly called out in the compatibility document, and go vet reports literals that need tags to ensure compatibility with later versions of Go. This problem was new enough in Go 1.1 to merit a short comment in the release notes. Nowadays we just mention the new field.
The second problem we found while testing Go 1.1 had nothing to do with APIs at all. It had to do with time.
\
Shortly after Go 1 was released, someone pointed out that time.Now returned times with microsecond precision, but with some extra code, it could return times with nanosecond precision instead. That sounds good, right? More precision is better. So we made that change.
\ That broke a handful of tests inside Google that were schematically like this one:
func TestSaveTime(t *testing.T) {
	t1 := time.Now()
	save(t1)
	if t2 := load(); t2 != t1 {
		t.Fatalf("load() = %v, want %v", t2, t1)
	}
}
\
This code calls time.Now and then round-trips the result through save and load and expects to get the same time back. If save and load use a representation that only stores microsecond precision, that will work fine in Go 1 but fail in Go 1.1.
\
To help fix tests like this, we added Round and Truncate methods to discard unwanted precision, and in the release notes, we documented the possible problem and the new methods to help fix it.
\ These examples show how testing finds different kinds of incompatibility than the API checks do. Of course, testing is not a complete guarantee of compatibility either, but it’s more complete than just API checks. There are many examples of problems we’ve found while testing that we decided did break the compatibility rules and rolled back before the release. The time precision change is an interesting example of something that broke programs but that we released anyway. We made the change because the improved precision was better and was allowed within the documented behavior of the function.
\ This example shows that, despite significant effort and attention, there are times when changing Go means breaking Go programs. The changes are, strictly speaking, “compatible” in the sense of the Go 1 document, but they still break programs. Most of these compatibility issues can be placed in one of three categories: output changes, input changes, and protocol changes.
An output change happens when a function gives a different output than it used to, but the new output is just as correct as, or even more correct than, the old output. If existing code is written to expect only the old output, it will break. We just saw an example of this, with time.Now adding nanosecond precision.
\ Sort. Another example happened in Go 1.6, when we changed the implementation of sort to run about 10% faster. Here’s an example program that sorts a list of colors by length of name:
colors := strings.Fields(
	`black white red orange yellow green blue indigo violet`)
sort.Sort(ByLen(colors))
fmt.Println(colors)
Go 1.5: [red blue green white black yellow orange indigo violet]
Go 1.6: [red blue white green black orange yellow indigo violet]
\ Changing sort algorithms often changes how equal elements are ordered, and that happened here. Go 1.5 returned green, white, black, in that order. Go 1.6 returned white, green, black.
\ Sort is clearly allowed to return equal results in any order it likes, and this change made it 10% faster, which is great. But programs that expect a specific output will break. This is a good example of why compatibility is so hard. We don’t want to break programs, but we also don’t want to be locked in to undocumented implementation details.
\
Compress/flate. As another example, in Go 1.8 we improved compress/flate to produce smaller outputs, with roughly the same CPU and memory overheads. That sounds like a win-win, but it broke a project inside Google that needed reproducible archive builds: now they couldn’t reproduce their old archives. They forked compress/flate and compress/gzip to keep a copy of the old algorithm.
\
We do a similar thing with the Go compiler, using a fork of the sort package (and others) so that the compiler produces the same results even when it is built using earlier versions of Go.
\ For output change incompatibilities like these, the best answer is to write programs and tests that accept any valid output, and to use these kinds of breakages as an opportunity to change your testing strategy, not just update the expected answers. If you need truly reproducible outputs, the next best answer is to fork the code to insulate yourself from changes, but remember that you’re also insulating yourself from bug fixes.
An input change happens when a function changes which inputs it accepts or how it processes them.
\
ParseInt. For example, Go 1.13 added support for underscores in large numbers for readability. Along with the language change, we made strconv.ParseInt accept the new syntax. This change didn’t break anything inside Google, but much later we heard from an external user whose code did break. Their program used numbers separated by underscores as a data format. It tried ParseInt first and only fell back to checking for underscores if ParseInt failed. When ParseInt stopped failing, the underscore-handling code stopped running.
\
ParseIP. As another example, Go’s net.ParseIP followed the examples in early IP RFCs, which often showed decimal IP addresses with leading zeros. It read the IP address 18.032.4.011 as 18.32.4.11, just with a few extra zeros. We found out much later that BSD-derived C libraries interpret leading zeros in IP addresses as starting an octal number: in those libraries, 18.032.4.011 means 18.26.4.9!
\
This was a serious mismatch between Go and the rest of the world, but changing the meaning of leading zeros from one Go release to the next would be a serious mismatch too. It would be a huge incompatibility. In the end, we decided to change net.ParseIP in Go 1.17 to reject leading zeros entirely. This stricter parsing ensures that when Go and C both parse an IP address successfully, or when old and new Go versions do, they all agree about what it means.
\
This change didn’t break anything inside Google, but the Kubernetes team was concerned about saved configurations that might have parsed before but would stop parsing with Go 1.17. Addresses with leading zeros probably should be removed from those configs, since Go interprets them differently from essentially every other language, but that should happen on Kubernetes’s timeline, not Go’s. To avoid the semantic change, Kubernetes started using its own forked copy of the original net.ParseIP.
\ The best response to input changes is to process user input by first validating the syntax you want to accept before parsing the values, but sometimes you need to fork the code instead.
The final common kind of incompatibility is protocol changes. A protocol change is a change made to a package that ends up externally visible in the protocols a program uses to communicate with the external world. Almost any change can become externally visible in certain programs, as we saw with ParseInt and ParseIP, but protocol changes are externally visible in essentially all programs.
\ HTTP/2. A clear example of a protocol change is when Go 1.6 added automatic support for HTTP/2. Suppose a Go 1.5 client is connecting to an HTTP/2-capable server over a network with middleboxes that happen to break HTTP/2. Since Go 1.5 only uses HTTP/1.1, the program works fine. But then updating to Go 1.6 breaks the program, because Go 1.6 starts using HTTP/2, and in this context, HTTP/2 doesn’t work.
\ Go aims to support modern protocols by default, but this example shows that enabling HTTP/2 can break programs through no fault of their own (nor any fault of Go’s). Developers in this situation could go back to using Go 1.5, but that’s not very satisfying. Instead, Go 1.6 documented the change in the release notes and made it straightforward to disable HTTP/2.
\
In fact, Go 1.6 documented two ways to disable HTTP/2: configure the TLSNextProto field explicitly using the package API, or set the GODEBUG environment variable:
GODEBUG=http2client=0 ./myprog
GODEBUG=http2server=0 ./myprog
GODEBUG=http2client=0,http2server=0 ./myprog
\ As we’ll see later, Go 1.21 generalizes this GODEBUG mechanism to make it a standard for all potentially breaking changes.
\ SHA1. Here’s a subtler example of a protocol change. No one should be using SHA1-based certificates for HTTPS anymore. Certificate authorities stopped issuing them in 2015, and all the major browsers stopped accepting them in 2017. In early 2020, Go 1.18 disabled support for them by default, with a GODEBUG setting to override that change. We also announced our intent to remove the GODEBUG setting in Go 1.19.
\
The Kubernetes team let us know that some installations still use private SHA1 certificates. Putting aside the security questions, it’s not Kubernetes’s place to force these enterprises to upgrade their certificate infrastructure, and it would be extremely painful to fork crypto/tls and net/http to keep SHA1 support. Instead, we agreed to keep the override in place longer than we had planned, to create more time for an orderly transition. After all, we want to break as few programs as possible.
To improve backwards compatibility even in these subtle cases we’ve been examining, Go 1.21 expands and formalizes the use of GODEBUG.
\ To begin with, for any change that is permitted by Go 1 compatibility but still might break existing programs, we do all the work we just saw to understand potential compatibility problems, and we engineer the change to keep as many existing programs working as possible. For the remaining programs, the new approach is:
\ • GODEBUG settings allow opting back into the old behavior. Compatibility settings are maintained for a minimum of two years (four Go releases); some, such as http2client and http2server, will be maintained much longer, even indefinitely.
\ • Each GODEBUG setting has an associated runtime/metrics counter named /godebug/non-default-behavior/<name>:events that counts the number of times a particular program’s behavior has changed based on a non-default value for that setting. For example, when GODEBUG=http2client=0 is set, /godebug/non-default-behavior/http2client:events counts the number of HTTP transports that the program has configured without HTTP/2 support.
\ • A program’s GODEBUG settings default to match the Go version listed in its go.mod file. If your program’s go.mod file says go 1.20 and you update to a Go 1.21 toolchain, any GODEBUG-controlled behaviors changed in Go 1.21 will retain their old Go 1.20 behavior until you change the go.mod to say go 1.21.
\ • A program can override individual settings with //go:debug lines in package main.
\ This approach means that each new version of Go should be the best possible implementation of older versions of Go, even preserving behaviors that were changed in compatible-but-breaking ways in later releases when compiling old code.
\
For example, in Go 1.21, panic(nil) now causes a (non-nil) runtime panic, so that the result of recover now reliably reports whether the current goroutine is panicking. This new behavior is controlled by a GODEBUG setting and therefore dependent on the main package’s go.mod’s go line: if it says go 1.20 or earlier, panic(nil) is still allowed. If it says go 1.21 or later, panic(nil) turns into a panic with a runtime.PanicNilError. And the version-based default can be overridden explicitly by adding a line like this to package main:
//go:debug panicnil=1
\ This combination of features means that programs can update to newer toolchains while preserving the behaviors of the earlier toolchains they used, can apply finer-grained control over specific settings as needed, and can use production monitoring to understand which jobs make use of these non-default behaviors in practice. Combined, those should make rolling out new toolchains even smoother than in the past.
\ See “Go, Backwards Compatibility, and GODEBUG” for more details.
In the quoted text from “Go 1 and the Future of Go Programs” at the top of this post, the ellipsis hid the following qualifier:
At some indefinite point, a Go 2 specification may arise, but until that time, [… all the compatibility details …].
\ That raises an obvious question: when should we expect the Go 2 specification that breaks old Go 1 programs?
\ The answer is never. Go 2, in the sense of breaking with the past and no longer compiling old programs, is never going to happen. Go 2 in the sense of being the major revision of Go 1 we started toward in 2017 has already happened.
\ There will not be a Go 2 that breaks Go 1 programs. Instead, we are going to double down on compatibility, which is far more valuable than any possible break with the past. In fact, we believe that prioritizing compatibility was the most important design decision we made for Go 1.
\ So what you will see over the next few years is plenty of new, exciting work, but done in a careful, compatible way, so that we can keep your upgrades from one toolchain to the next as boring as possible.
Russ Cox
\ This article is available on The Go Blog under a CC BY 4.0 DEED license.
2026-03-28 22:00:46
The Markup, now a part of CalMatters, uses investigative reporting, data analysis, and software engineering to challenge technology to serve the public good. Sign up for Klaxon, a newsletter that delivers our stories and tools directly to your inbox.
\ The Markup won multiple awards of excellence in the Society for News Design’s Best of News Design Creative Competition, which honors excellence in visual storytelling, design, and journalism produced in 2023.
\ “See the Neighborhoods Internet Providers Excluded from Fast Internet,” designed by Joel Eastwood, won an award of excellence in the page design elements category. Address by address, this map shows the drastically different internet speeds offered by AT&T, CenturyLink, or Verizon—all at the same price point per provider—in major U.S. cities. This map, unlike the ones hosted by most popular online mapping services, also protects our readers’ privacy.
\ Illustrations by Gabriel Hongsdusit in “How Your Attention Is Auctioned Off to Advertisers” won an award of excellence in the illustration category. Using illustrations, this story explains how, in mere milliseconds, online advertisers scrutinize your personal data and bid for your eyeballs.
\ Maria Puertas’s Instagram reel explaining ChatGPT’s impact on the environment won an award of excellence in the platform design category.
\ Four illustrations (one, two, three, four) by Victor Bizar Gómez for our Neighborhood Watch series won an award of excellence in the investigative or public service line of coverage category. The series of stories by Lam Thuy Vo found that Ring’s social platform funnels suspicions from residents in Whiter and wealthier areas of Los Angeles directly to police.
\ The United States Place Sampler, by Leon Yin, Aaron Sankin, and Joe Nudell at Big Local News, won an award of excellence in the product design category. This tool makes it easy for anyone to sample random U.S. street addresses.
\ The Markup’s social media storytelling, by Maria Puertas and Gabriel Hongsdusit, won an award of excellence in the organizational portfolio category.
A big congratulations to the entire team for recognition of their hard work. Congratulations too, to all of this year’s SND honorees.
\ Also published here
\ Photo by Med Badr Chemmaoui on Unsplash
\