
Founded in 2004, Engadget is a popular tech blog covering news, reviews, and guides on everything from gadgets to AI and electric vehicles.

Playdate Season 3 is coming later this year

2026-04-17 02:13:40

Playdate is getting a third season of curated, surprise games, Panic announced today. We don't know much beyond the fact that Season Three is officially happening, but Panic's Head of Playdate Greg Maletic said in an announcement video that it will be here "in time for the holidays" this year. Considering we had to wait a whole three years for Season Two to come out following Season One's release with the console in 2022, that doesn't sound so bad.

Panic hasn't yet said how many games Season Three will include, or how much it will cost. While Season One had a total of 24 games — with a release schedule of two games per week for 12 weeks — last year's Season Two had half as many (plus Blippo+), and cost $39. But that drop in quantity thankfully didn't mean a drop in quality. Season Two was great, with a collection of games that felt stronger overall than the first. I, for one, can't wait to see what Season Three brings. In other exciting news, Panic also announced today that the much, much-awaited game Office Chair Curling is finally available for purchase on Playdate and Steam, with the option for online cross-play.

This article originally appeared on Engadget at https://www.engadget.com/gaming/playdate-season-3-is-coming-later-this-year-181340117.html?src=rss

A first look at Metro 2039 shows how its Ukrainian developer turned the darkness up to 11

2026-04-17 01:15:00

If the real world isn’t grim enough for you, Ukrainian developer 4A Games has your back: Metro 2039 has been announced and is scheduled to arrive this winter. And based on the developer’s first look at the title, Metro 2039 looks to be an even darker affair than previous entries in the series. That’s a tall order, but the real-world turmoil that has enveloped 4A Games since Russia’s invasion of Ukraine sounds like it has become a painful source of inspiration for the developer.

The lengthy cinematic reveal, which also contains a brief bit of gameplay at the end, doesn’t give much of the story away. But it does serve to place you right in the ruined, terrifying world of the Metro series. Metro 2039 arrives about 25 years after a nuclear apocalypse wiped out most life on the planet. The series focuses on survivors who live in Moscow’s ruined metro system. 4A says that this time out, the different underground factions have been united by a group known as “the Novoreich,” complete with a new ruler, the Spartan known as Hunter.

Despite Hunter promising “salvation and a new life” for the survivors left on the surface, things aren’t exactly rosy underground. As you might expect, this supposedly “united” society is still a complete disaster, with propaganda, authoritarian rule and violence the hallmark of the regime.

Screenshot from Metro 2039.
4A Games

The Metro series is based on novels by Dmitry Glukhovsky, a Russian author who has lived in exile since publicly denouncing Russia’s invasion of Ukraine. 4A Games says that while this new game isn’t based specifically on one of his works, it collaborated with Glukhovsky on a story for Metro 2039 “shaped by shared values of freedom and truth, and informed by the harsh realities of the world today.”

In statements from the studio, 4A directly acknowledges the conditions that Metro 2039 was created under. “Many developers continue to work from multiple locations, facing daily challenges never anticipated,” the studio says. “Through power outages, reliance on generators, and disruptions from missile and drone attacks, development has continued – driven by resilience, shared support, and a commitment to the work.”

It goes on to state: “The war has directly shaped the development of Metro 2039, with its story focused acutely on choices, actions, consequences, and the cost of securing a future. While told from a distinctly Ukrainian perspective, Metro 2039 remains an authentic Metro story.” While the Metro series has been unfailingly bleak, it’s not hard to imagine how Russia’s invasion could have influenced the storytelling coming out of a Ukrainian studio with an exiled Russian author on the story team. But the limited bit of the game we’ve seen so far doesn’t make anything too explicit.

Screenshot from Metro 2039's reveal trailer.
4A Games

The trailer shows off the new player character known as The Stranger, the first voiced protagonist in the series (though we don’t hear him do anything but scream in the preview). The Stranger has apparently been surviving in the above-ground wasteland but is forced to return to the metro. The little bit of gameplay we saw was the standard first-person shooter view: The Stranger heads underground and is immediately ambushed by a pretty horrific monster that he barely escapes. He’s then dragged to “safety” by a group of survivors who get the doors to their shelter shut just before being overrun by a larger horde. Creepy stuff.

The rest of the preview largely feels like a dream (or nightmare) sequence — but while it’s hard to put together what is going on, there’s no doubt that the detail in the environments and characters is top-notch. Given that the last Metro game, Metro Exodus, was released way back in 2019, it’s fair to say that we’re getting a much more graphically impressive rendering of ruined Moscow and the tunnels beneath it.

There’s no exact release date yet, but 4A Games says Metro 2039 will arrive this winter for Xbox Series X/S, PlayStation 5 and PC.

This article originally appeared on Engadget at https://www.engadget.com/gaming/a-first-look-at-metro-2039-shows-how-its-ukrainian-developer-turned-the-darkness-up-to-11-171500713.html?src=rss

Google Chrome makes it easier to wrangle different tabs in AI Mode

2026-04-17 01:00:00

Love 'em or hate 'em, no modern browser is complete without robust tab support, and it seems Google's AI Mode is no exception. Starting today, the company is rolling out an update to users in the US that makes the tool better at interacting with and understanding tabs.

To start, the next time you use AI Mode on Chrome for desktop and click on a link, the chatbot will open a new side-by-side interface that allows you to both browse the new webpage and ask questions of AI Mode. The connection allows the chatbot to maintain the context of the search that brought you to that website in the first place. 

For instance, say you're looking for a new coffee maker to buy for your apartment. After AI Mode finds a handful of different models for you to compare, you can click on one to go to the manufacturer's website and ask the chatbot additional questions like "how easy is this to clean?" Thanks to the expanded context window, you don't need to refer to the specific model by name.

Meanwhile, if you have an existing tab or group of tabs that you'd like AI Mode to factor into a new search, you can do that now too. From the redesigned Plus menu, just click the new option that's there. While you're in the Plus menu, you can also prompt AI Mode to consider other materials, including images and PDFs, alongside any relevant tabs.   

In testing, Google says users found the integration translated to less tab switching and made it easier to focus. Mike Torres, vice president of product for Chrome, said the new features represent a broader effort by Google to bring practical AI capabilities to its web browser. Torres added that the company would soon bring today's updates to more places around the world.

This article originally appeared on Engadget at https://www.engadget.com/ai/google-chrome-makes-it-easier-to-wrangle-different-tabs-in-ai-mode-170000914.html?src=rss

OpenAI's latest Codex update builds the groundwork for its upcoming super app

2026-04-17 01:00:00

Last month, following reporting from The Wall Street Journal, OpenAI confirmed it was working on a desktop super app that would combine ChatGPT, its Codex coding agent and Atlas web browser into one cohesive experience. OpenAI is not releasing that application today. Instead, it's pushing out a major update to Codex that significantly expands what that software can do. However, the new release offers a glimpse of what OpenAI hopes to build with its latest effort.  

"We're building the super app out in the open," said Thibault Sottiaux, the head of Codex, during a press briefing held by OpenAI. "This release is about developers. In the future, we will broaden it up to a wider audience." Until then, the latest version of Codex offers developers multi-purpose AI agents that can work across a "larger surface area," while being more proactive. In practice, that translates to a host of new capabilities, starting with computer use. 

The agents inside of Codex can interact with other apps on your PC. When prompting one of OpenAI's models, you can name a specific program or let it determine the best application for the job. Computer use is available in competing apps like Claude Cowork, but where OpenAI believes Codex offers an edge in that department is in the "secret sauce" it built to allow an agent to run an app without bogging down your entire system, so the two of you can work in tandem. At the same time, OpenAI is releasing 111 new plugins for Codex that combine skills, app integrations and model context protocol server connections to give Codex more ways to gather context and use the tools developers depend on for their work.

The company has also added a built-in browser, with a commenting system that allows you to prompt Codex to make tweaks to specific parts of a webpage or web app you're building. In the demo OpenAI showed, one member of the Codex team used this tool to instruct Codex to change the margins on a graph so that the y axis wasn't cut off. Complementing this is built-in image generation. Codex can use gpt-image-1.5 to create product concepts, mockups, frontend designs and even assets for simple games. It also allows Codex to use screenshots to verify it's on the right track with a user request.   

With today's update, OpenAI is also previewing a pair of memory features. The first allows Codex to recall context from previous tasks to inform how it goes about future prompts. According to OpenAI, with time, this will allow Codex to complete requests faster and to a higher standard. The app will also use the context it's gathered to suggest proactive actions. For example, at the start of your day, it might suggest you respond to a comment a coworker left on a Google Doc draft you wrote. 

If you want to try the updated Codex for yourself, OpenAI is starting to roll out the new version to desktop app users who are logged in with their ChatGPT account. Computer use is available to macOS users first, with availability for people in the EU and UK to follow soon. Similarly, Brits and Europeans will need to wait to try the memory features OpenAI has built into Codex.  

This article originally appeared on Engadget at https://www.engadget.com/ai/openais-latest-codex-update-builds-the-groundwork-for-its-upcoming-super-app-170000019.html?src=rss

Intel launches new Core Series 3 chips for mainstream laptops

2026-04-17 00:48:21

Intel has unveiled its new Core Series 3 chips, the official name for the series codenamed Wildcat Lake, which is intended for mainstream and value-oriented laptops. Built using the same Intel 18A process as its Core Ultra Series 3 chips, they’re significantly more powerful than the previous generation and promise "exceptional battery life" and "boosted AI-ready performance."

Intel says the Core Series 3 offers up to 47 percent better single-thread performance and 41 percent better multi-thread performance, as well as 2.8x better GPU AI performance, compared to a five-year-old PC. Stacked up against its last-gen Intel Core 7 150U processor, the new mobile chip draws up to 64 percent less processor power and delivers 2.7x the GPU AI performance. In other words, expect more grunt and improved efficiency.

At the top end of the lineup sits the six-core Intel Core 7 360, which has a P-core Max Turbo frequency of 4.8GHz and an NPU rated at 17 TOPS. Those figures scale down as you move through the other six-core options, and there’s also a five-core Core 3 processor at the entry level with a more modest GPU.

Intel promises all-day battery life, rated at 12.5 hours of office work and 18.5 hours of Netflix streaming. As for connectivity, there’s support for Wi-Fi 7, Bluetooth 6 and two Thunderbolt 4 ports. The Core Series 3 chips will make their way into a variety of laptops throughout 2026, including Acer’s Aspire Go 14, 15 and 16, the ASUS Vivobook 14/15/17 and ExpertBook B5 Flip, B3 G2 and P3 G2. The likes of Dell, Samsung and Lenovo will announce their own Core Series 3 devices in the near future.

This article originally appeared on Engadget at https://www.engadget.com/computing/laptops/intel-launches-new-core-series-3-chips-for-mainstream-laptops-164821846.html?src=rss

Gemini can now draw on your Google data to personalize the images it generates

2026-04-17 00:00:00

Your Google Photos library could soon influence the kind of images you can generate with Gemini. After letting users personalize the AI assistant's responses with data from Gmail, Search and YouTube, Google says it's bringing that same "Personal Intelligence" to Nano Banana 2 to make it easier for users to create personalized images with the AI model.

The goal is to have the data affiliated with your Google account — your YouTube history, emails, Google Photos, etc. — provide context to Nano Banana 2 so you don't have to. Rather than prompting Gemini's image generation model with information about you or photos of your belongings, a direction to "create a picture of my desert island essentials" should produce an image that includes the things you care about without any extra context. Similarly, if you use labels in Google Photos to identify people or pets, you can tell Gemini to "create a hand-drawn illustration of mom," and it should be able to use Google Photos' labels to find the right reference photo and create an image of the right person.

A gif of someone generating an image with Gemini using Personal Intelligence.
Google

If Gemini creates images that don't look right, you can still send a follow-up prompt to refine the result, or select a new source image from Google Photos with the "+" button. Google says you can also click the "Sources" button to view what images the AI referenced in the first place, or ask it directly for the attribution and sources used for a specific image.

Personalized user data is one of the unique advantages Google has over companies offering competing AI assistants, so expanding Personal Intelligence to an already popular feature like image generation is a natural way to build on that lead. For now, this more personalized version of Nano Banana 2 is available in the Gemini app for eligible AI Pro and AI Ultra subscribers. Google says the feature will come to Gemini in Chrome and other users "soon."

This article originally appeared on Engadget at https://www.engadget.com/ai/gemini-can-now-draw-on-your-google-data-to-personalize-the-images-it-generates-160000269.html?src=rss