2026-01-30 05:19:12
Victoria, Seychelles, January 29, 2026 – MEXC, the world's fastest-growing digital asset exchange and a pioneer of true zero-fee trading, launched a limited-time MEXC Earn event to celebrate the listing of USAT, providing users with the opportunity to share 300,000 USAT and earn up to 300% APR.
USAT is Tether's first US-regulated stablecoin, designed to comply with the GENIUS Act, which was signed into law in July 2025. Each USAT token maintains a 1:1 peg with the US dollar, backed entirely by liquid reserves including US dollars and short-term US Treasury bills held by Cantor Fitzgerald.
Since its listing on MEXC, USAT has seen strong user participation and asset inflows. As of January 29, 2026, MEXC wallets hold a total of $7,757,503 in USAT, the largest USAT balance of any trading platform.
The MEXC Earn event runs from January 27, 2026, 14:00 (UTC) through February 26, 2026, 14:00 (UTC). Users can stake USAT to share 300,000 USAT, distributed on a first-come, first-served basis until fully allocated. New users who register via the referral code (mexc-USAT) or exclusive link and complete KYC verification between January 27, 2026, 13:30 (UTC) and February 3, 2026, 13:30 (UTC) can access exclusive APR boosters for USAT or USDT Flexible Savings with up to 300% APR.
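For a concrete sense of what the headline rate means, here is a minimal sketch of the standard simple-APR arithmetic (daily interest = principal × APR / 365). The 1,000 USAT principal is purely hypothetical, and MEXC's actual payout rules, caps, and booster mechanics govern real earnings:

```python
# Illustrative simple (non-compounding) APR arithmetic only; actual
# MEXC Earn payout rules, caps, and booster mechanics may differ.

def daily_earnings(principal_usat: float, apr: float) -> float:
    """Estimate one day's interest under a simple APR model."""
    return principal_usat * apr / 365

principal = 1_000.0  # hypothetical stake of 1,000 USAT
apr = 3.00           # the advertised 300% APR cap, as a decimal

print(f"{daily_earnings(principal, apr):.2f} USAT/day")  # 8.22 USAT/day
```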

As the first exchange to list USAT and to offer flexible savings for it, MEXC gives users early access to opportunities around the new stablecoin.
With key advantages including rapid listing efficiency, over 3,000 listed tokens, zero-fee trading, and comprehensive liquidity, MEXC has become the preferred digital asset trading platform for a growing number of traders. Moving forward, MEXC will continue prioritizing user value by helping users seize early opportunities in emerging digital assets.
To learn more or participate in the USAT flexible savings event, visit the MEXC Earn page.
Founded in 2018, MEXC is committed to being "Your Easiest Way to Crypto." Serving over 40 million users across 170+ countries, MEXC is known for its broad selection of trending tokens, everyday airdrop opportunities, and low trading fees.
Our user-friendly platform is designed to support both new traders and experienced investors, offering secure and efficient access to digital assets. MEXC prioritizes simplicity and innovation, making crypto trading more accessible and rewarding.
MEXC Official Website | X | Telegram | How to Sign Up on MEXC
For media inquiries, please contact MEXC PR team: [email protected]
Risk Disclaimer:
This content does not constitute investment advice. Given the highly volatile nature of the cryptocurrency market, investors are encouraged to carefully assess market fluctuations, project fundamentals, and potential financial risks before making any trading decisions.
:::tip This story was published as a press release by Blockman under HackerNoon’s Business Blogging Program.
:::
2026-01-30 05:05:56
The world of decentralized finance is moving fast, and one project is currently taking center stage. Mutuum Finance (MUTM) has just announced the completion of two massive roadmap milestones. This news has sent a wave of excitement through the market.
Right now, over 19,000 investors are watching the project with intense focus. The protocol is no longer just a set of ideas on paper. It has officially moved into the execution phase, proving that the team can deliver on its promises.
For those following the DeFi space, these updates are a clear signal. The transition from development to a live environment is often where the most significant value shifts happen.
With the protocol nearing its full launch, the window for early entry is closing rapidly. The momentum is building, and the community is growing larger every single day.
The first major milestone is the activation of the V1 protocol on the Sepolia testnet. This is a huge step forward for the project. It means the core code is now live and functional in a public testing environment. Users can finally interact with the system’s primary tools. This includes the liquidity pools, the mtToken system, and the automated liquidator bot.
By launching on the testnet, Mutuum Finance has proven that its technology works. This is not just a concept anymore; it is a working financial machine. Investors can see how the Peer-to-Contract (P2C) and Peer-to-Peer (P2P) lending markets function.
The V1 launch is the ultimate proof of utility. It shows that the protocol is ready to handle assets like ETH and USDT. This technical success is a primary driver of the massive surge in investor interest we are seeing this week.
The second milestone is just as important: the completion of the independent security audit by Halborn. In the world of crypto, security is everything. Halborn is a world-class firm known for protecting the biggest names in the industry. They have thoroughly reviewed the Mutuum smart contracts to ensure user funds are safe.
In addition to the Halborn audit, the project has earned a high 90/100 security score from CertiK. This double layer of protection is rare for a new project. It gives large-scale investors the confidence they need to commit significant capital.
To keep the system even safer, the team has also launched a $50,000 bug bounty program. This invites the best security experts in the world to test the code. With these security foundations in place, Mutuum Finance is positioning itself as one of the most reliable and transparent DeFi protocols of 2026.
The numbers behind the Mutuum Finance presale are staggering. The project has already raised over $19.95 million in capital. This funding comes from a global community of more than 19,000 holders. This is not a project controlled by a small group of people. It is a massive, decentralized movement with thousands of supporters across the globe.

The MUTM supply is also very clear. There is a total cap of 4 billion MUTM tokens. Out of that, 1.82 billion tokens (45.5%) are dedicated to the presale. This large allocation ensures that the community owns a significant portion of the project.
To date, over 835 million tokens have already been sold. This means that nearly half of the available presale supply has already been claimed. As each phase sells out, the remaining supply becomes more scarce, which is driving even more demand.
The price of the MUTM token has been on a steady upward path. The presale began in early 2025 at a price of just $0.01. Today, the project is in Phase 7, and the price is $0.04, a 300% increase for the earliest participants. But the growth is not finished yet.
The official launch price is confirmed at $0.06. This means that people joining now at $0.04 are still looking at a significant discount before the token even hits the open market. For those who joined in Phase 1, the launch price represents a 500% MUTM appreciation.
The structure of the presale is designed so that each phase increases the price by nearly 20%. This creates a natural sense of urgency. Every time a phase closes, the cost of entry goes up. Investors are rushing to secure their positions before the next price hike takes effect.
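The percentage figures quoted above follow directly from the stated phase prices. Here is a minimal sketch for anyone who wants to verify the arithmetic; the 50% figure is simply the gap between the Phase 7 price and the launch price, derived here rather than quoted from the project:

```python
# Sanity-checking the stated presale percentages; prices come straight
# from the announcement, and nothing here is a price projection.

phase1_price = 0.01  # early-2025 Phase 1 price (USD)
phase7_price = 0.04  # current Phase 7 price (USD)
launch_price = 0.06  # confirmed launch price (USD)

def pct_gain(entry: float, exit_: float) -> float:
    """Percentage increase from an entry price to an exit price."""
    return (exit_ / entry - 1) * 100

print(round(pct_gain(phase1_price, phase7_price), 1))  # 300.0 -> Phase 1 to Phase 7
print(round(pct_gain(phase1_price, launch_price), 1))  # 500.0 -> Phase 1 to launch
print(round(pct_gain(phase7_price, launch_price), 1))  # 50.0  -> Phase 7 to launch
```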
One of the most exciting features of the presale is the 24-hour leaderboard. Every single day, the top daily contributor wins a $500 bonus in MUTM tokens. This has created a competitive and fun environment for the community. It encourages steady participation and ensures that the project stays active every day.
We are also seeing a major increase in whale activity. Large investors are starting to move into the project with single allocations as high as $100,000. These "whales" are professionals who do deep research before they move their money. Their entry into Phase 7 is a massive sign of confidence. They want to secure as many tokens as possible before the $0.06 launch.
The project has also made it easy for everyone to join by supporting direct card payments. You don't need to be a crypto expert to participate; you can use a credit or debit card to secure your tokens in seconds.
The combination of utility, high security, and a massive community makes MUTM a standout performer. The project is also planning to launch a native stablecoin and move to Layer-2 networks to make transactions faster and cheaper. All of these catalysts are lining up to make 2026 a huge year for the protocol. The window to join at the $0.04 price is closing fast.
For more information about Mutuum Finance (MUTM), visit the links below:
Website: https://www.mutuum.com
:::tip This story was published as a press release by Btcwire under HackerNoon’s Business Blogging Program.
:::
2026-01-30 02:00:02
2 INTERACTIVE WORLD SIMULATION
3.1 DATA COLLECTION VIA AGENT PLAY
3.2 TRAINING THE GENERATIVE DIFFUSION MODEL
4.1 AGENT TRAINING
4.2 GENERATIVE MODEL TRAINING
5.1 SIMULATION QUALITY
5.2 ABLATIONS
7 DISCUSSION, ACKNOWLEDGEMENTS AND REFERENCES
We introduced GameNGen and demonstrated that high-quality, real-time game play at 20 frames per second is possible on a neural model. We also provided a recipe for converting an interactive piece of software such as a computer game into a neural model.

Limitations. GameNGen suffers from a limited amount of memory. The model only has access to a little over 3 seconds of history, so it is remarkable that much of the game logic is persisted for drastically longer time horizons. While some of the game state is persisted through screen pixels (e.g. ammo and health tallies, available weapons, etc.), the model likely learns strong heuristics that allow meaningful generalizations. For example, from the rendered view the model learns to infer the player's location, and from the ammo and health tallies it might infer whether the player has already been through an area and defeated the enemies there. That said, it is easy to create situations where this context length is not enough. Continuing to increase the context size with our existing architecture yields only marginal benefits (Section 5.2.1), and the model's short context length remains an important limitation. The second important limitation is the remaining difference between the agent's behavior and that of human players. For example, our agent, even at the end of training, still does not explore all of the game's locations and interactions, leading to erroneous behavior in those cases.
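To make the memory limitation concrete, the sketch below shows the kind of fixed-length conditioning window the discussion describes. This is not the paper's code: `predict_next_frame` is a hypothetical stand-in for the generative model, and the 64-frame window is our reading of "a little over 3 seconds" at 20 FPS:

```python
from collections import deque

FPS = 20
CONTEXT_FRAMES = 64  # assumed window: 64 frames / 20 FPS = 3.2 s of history
print(CONTEXT_FRAMES / FPS)  # 3.2 seconds of visible history

# Rolling buffers of past observations and actions; anything older than
# the window is lost unless it is reflected in on-screen state
# (ammo and health tallies, available weapons, etc.).
frames = deque(maxlen=CONTEXT_FRAMES)
actions = deque(maxlen=CONTEXT_FRAMES)

def step(model, obs, action):
    """Append the latest (obs, action) pair and predict the next frame."""
    frames.append(obs)
    actions.append(action)
    # `predict_next_frame` is a hypothetical stand-in for the model call.
    return model.predict_next_frame(list(frames), list(actions))
```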
Future Work. We demonstrate GameNGen on the classic game DOOM. It would be interesting to test it on other games or, more generally, on other interactive software systems. We note that nothing in our technique is DOOM-specific except for the reward function for the RL agent; we plan on addressing that in future work. While GameNGen manages to maintain game state accurately, it isn't perfect, as per the discussion above, and a more sophisticated architecture might be needed to mitigate this. GameNGen currently has a limited capability to leverage more than a minimal amount of memory; experimenting with further expanding the memory effectively could be critical for more complex games/software. Finally, GameNGen runs at 20 or 50 FPS on a TPU-v5; it would be interesting to experiment with further optimization techniques to get it to run at higher frame rates and on consumer hardware.
Today, video games are programmed by humans. GameNGen is a proof-of-concept for one part of a new paradigm where games are weights of a neural model, not lines of code. GameNGen shows that an architecture and model weights exist such that a neural model can effectively run a complex game (DOOM) interactively on existing hardware. While many important questions remain, we are hopeful that this paradigm could have important benefits. For example, the development process for video games under this new paradigm might be less costly and more accessible, whereby games could be developed and edited via textual descriptions or example images. A small part of this vision, namely creating modifications or novel behaviors for existing games, might be achievable in the shorter term. For example, we might be able to convert a set of frames into a new playable level or create a new character just based on example images, without having to author code. Other advantages of this new paradigm include strong guarantees on frame rates and memory footprints. We have not experimented with these directions yet and much more work is required here, but we are excited to try! Hopefully this small step will someday contribute to a meaningful improvement in people's experience with video games, or maybe even more generally, in day-to-day interactions with interactive software systems.
ACKNOWLEDGEMENTS
We’d like to extend a huge thank you to Eyal Segalis, Eyal Molad, Matan Kalman, Nataniel Ruiz, Amir Hertz, Matan Cohen, Yossi Matias, Yael Pritch, Danny Lumen, Valerie Nygaard, the Theta Labs and Google Research teams, and our families for insightful feedback, ideas, suggestions, and support.
CONTRIBUTION
• Dani Valevski: Developed much of the codebase, tuned parameters and details across the system, added autoencoder fine-tuning, agent training, and distillation.
• Yaniv Leviathan: Proposed project, method, and architecture, developed the initial implementation, key contributor to implementation and writing.
• Moab Arar: Led auto-regressive stabilization with noise-augmentation, many of the ablations, and created the dataset of human-play data.
• Shlomi Fruchter: Proposed project, method, and architecture. Project leadership, initial implementation using DOOM, main manuscript writing, evaluation metrics, random policy data pipeline.
REFERENCES
Tomas Akenine-Möller, Eric Haines, and Naty Hoffman. Real-Time Rendering, Fourth Edition. A. K. Peters, Ltd., USA, 4th edition, 2018. ISBN 0134997832.

Eloi Alonso, Adam Jelley, Vincent Micheli, Anssi Kanervisto, Amos Storkey, Tim Pearce, and François Fleuret. Diffusion for world modeling: Visual details matter in Atari, 2024.

Omer Bar-Tal, Hila Chefer, Omer Tov, Charles Herrmann, Roni Paiss, Shiran Zada, Ariel Ephrat, Junhwa Hur, Guanghui Liu, Amit Raj, Yuanzhen Li, Michael Rubinstein, Tomer Michaeli, Oliver Wang, Deqing Sun, Tali Dekel, and Inbar Mosseri. Lumiere: A space-time diffusion model for video generation, 2024. URL https://arxiv.org/abs/2401.12945.

Andreas Blattmann, Tim Dockhorn, Sumith Kulal, Daniel Mendelevitch, Maciej Kilian, Dominik Lorenz, Yam Levi, Zion English, Vikram Voleti, Adam Letts, Varun Jampani, and Robin Rombach. Stable video diffusion: Scaling latent video diffusion models to large datasets, 2023a. URL https://arxiv.org/abs/2311.15127.

Andreas Blattmann, Robin Rombach, Huan Ling, Tim Dockhorn, Seung Wook Kim, Sanja Fidler, and Karsten Kreis. Align your latents: High-resolution video synthesis with latent diffusion models, 2023b. URL https://arxiv.org/abs/2304.08818.

Tim Brooks, Bill Peebles, Connor Holmes, Will DePue, Yufei Guo, Li Jing, David Schnurr, Joe Taylor, Troy Luhman, Eric Luhman, Clarence Ng, Ricky Wang, and Aditya Ramesh. Video generation models as world simulators, 2024. URL https://openai.com/research/video-generation-models-as-world-simulators.

Jake Bruce, Michael Dennis, Ashley Edwards, Jack Parker-Holder, Yuge Shi, Edward Hughes, Matthew Lai, Aditi Mavalankar, Richie Steigerwald, Chris Apps, Yusuf Aytar, Sarah Bechtle, Feryal Behbahani, Stephanie Chan, Nicolas Heess, Lucy Gonzalez, Simon Osindero, Sherjil Ozair, Scott Reed, Jingwei Zhang, Konrad Zolna, Jeff Clune, Nando de Freitas, Satinder Singh, and Tim Rocktäschel. Genie: Generative interactive environments, 2024. URL https://arxiv.org/abs/2402.15391.

Rohit Girdhar, Mannat Singh, Andrew Brown, Quentin Duval, Samaneh Azadi, Sai Saketh Rambhatla, Akbar Shah, Xi Yin, Devi Parikh, and Ishan Misra. Emu Video: Factorizing text-to-video generation by explicit image conditioning, 2023. URL https://arxiv.org/abs/2311.10709.

Agrim Gupta, Lijun Yu, Kihyuk Sohn, Xiuye Gu, Meera Hahn, Li Fei-Fei, Irfan Essa, Lu Jiang, and José Lezama. Photorealistic video generation with diffusion models, 2023. URL https://arxiv.org/abs/2312.06662.

David Ha and Jürgen Schmidhuber. World models, 2018.

Danijar Hafner, Timothy Lillicrap, Jimmy Ba, and Mohammad Norouzi. Dream to control: Learning behaviors by latent imagination, 2020. URL https://arxiv.org/abs/1912.01603.

Jonathan Ho and Tim Salimans. Classifier-free diffusion guidance, 2022. URL https://arxiv.org/abs/2207.12598.

Jonathan Ho, Chitwan Saharia, William Chan, David J. Fleet, Mohammad Norouzi, and Tim Salimans. Cascaded diffusion models for high fidelity image generation. arXiv preprint arXiv:2106.15282, 2021.

Jonathan Ho, William Chan, Chitwan Saharia, Jay Whang, Ruiqi Gao, Alexey A. Gritsenko, Diederik P. Kingma, Ben Poole, Mohammad Norouzi, David J. Fleet, and Tim Salimans. Imagen Video: High definition video generation with diffusion models. ArXiv, abs/2210.02303, 2022. URL https://api.semanticscholar.org/CorpusID:252715883.

Bernhard Kerbl, Georgios Kopanas, Thomas Leimkühler, and George Drettakis. 3D Gaussian splatting for real-time radiance field rendering. ACM Transactions on Graphics, 42(4), July 2023. URL https://repo-sam.inria.fr/fungraph/3d-gaussian-splatting/.

Seung Wook Kim, Yuhao Zhou, Jonah Philion, Antonio Torralba, and Sanja Fidler. Learning to simulate dynamic environments with GameGAN. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2020.

Diederik P. Kingma and Max Welling. Auto-encoding variational Bayes. In 2nd International Conference on Learning Representations, ICLR 2014, Banff, AB, Canada, April 14-16, 2014, Conference Track Proceedings, 2014.

Willi Menapace, Stéphane Lathuilière, Sergey Tulyakov, Aliaksandr Siarohin, and Elisa Ricci. Playable video generation, 2021. URL https://arxiv.org/abs/2101.12195.

Willi Menapace, Aliaksandr Siarohin, Stéphane Lathuilière, Panos Achlioptas, Vladislav Golyanik, Sergey Tulyakov, and Elisa Ricci. Promptable game models: Text-guided game simulation via masked diffusion models. ACM Transactions on Graphics, 43(2):1–16, January 2024. ISSN 1557-7368. doi: 10.1145/3635705. URL http://dx.doi.org/10.1145/3635705.

Ben Mildenhall, Pratul P. Srinivasan, Matthew Tancik, Jonathan T. Barron, Ravi Ramamoorthi, and Ren Ng. NeRF: Representing scenes as neural radiance fields for view synthesis. In ECCV, 2020.

Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, Martin A. Riedmiller, Andreas Kirkeby Fidjeland, Georg Ostrovski, Stig Petersen, Charlie Beattie, Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran, Daan Wierstra, Shane Legg, and Demis Hassabis. Human-level control through deep reinforcement learning. Nature, 518:529–533, 2015. URL https://api.semanticscholar.org/CorpusID:205242740.

Danko Petric and Marija Milinkovic. Comparison between CS and JPEG in terms of image compression, 2018. URL https://arxiv.org/abs/1802.05114.

Dustin Podell, Zion English, Kyle Lacey, Andreas Blattmann, Tim Dockhorn, Jonas Müller, Joe Penna, and Robin Rombach. SDXL: Improving latent diffusion models for high-resolution image synthesis. arXiv preprint arXiv:2307.01952, 2023.

Antonin Raffin, Ashley Hill, Adam Gleave, Anssi Kanervisto, Maximilian Ernestus, and Noah Dormann. Stable-Baselines3: Reliable reinforcement learning implementations. Journal of Machine Learning Research, 22(268):1–8, 2021. URL http://jmlr.org/papers/v22/20-1364.html.

Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. Hierarchical text-conditional image generation with CLIP latents. arXiv preprint arXiv:2204.06125, 2022.

Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 10684–10695, 2022.

Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily L. Denton, Kamyar Ghasemipour, Raphael Gontijo Lopes, Burcu Karagol Ayan, Tim Salimans, et al. Photorealistic text-to-image diffusion models with deep language understanding. Advances in Neural Information Processing Systems, 35:36479–36494, 2022.

Tim Salimans and Jonathan Ho. Progressive distillation for fast sampling of diffusion models. In The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022. OpenReview.net, 2022a. URL https://openreview.net/forum?id=TIdIXIpzhoI.

Tim Salimans and Jonathan Ho. Progressive distillation for fast sampling of diffusion models, 2022b. URL https://arxiv.org/abs/2202.00512.

John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. CoRR, abs/1707.06347, 2017. URL http://arxiv.org/abs/1707.06347.

Noam Shazeer and Mitchell Stern. Adafactor: Adaptive learning rates with sublinear memory cost. CoRR, abs/1804.04235, 2018. URL http://arxiv.org/abs/1804.04235.

P. Shirley and R.K. Morley. Realistic Ray Tracing, Second Edition. Taylor & Francis, 2008. ISBN 9781568814612. URL https://books.google.ch/books?id=knpN6mnhJ8QC.

Jiaming Song, Chenlin Meng, and Stefano Ermon. Denoising diffusion implicit models. arXiv:2010.02502, October 2020. URL https://arxiv.org/abs/2010.02502.

Thomas Unterthiner, Sjoerd van Steenkiste, Karol Kurach, Raphaël Marinier, Marcin Michalski, and Sylvain Gelly. FVD: A new metric for video generation. In Deep Generative Models for Highly Structured Data, ICLR 2019 Workshop, New Orleans, Louisiana, United States, May 6, 2019.

Zhengyi Wang, Cheng Lu, Yikai Wang, Fan Bao, Chongxuan Li, Hang Su, and Jun Zhu. ProlificDreamer: High-fidelity and diverse text-to-3D generation with variational score distillation. arXiv preprint arXiv:2305.16213, 2023.

Marek Wydmuch, Michał Kempka, and Wojciech Jaśkowski. ViZDoom Competitions: Playing Doom from Pixels. IEEE Transactions on Games, 11(3):248–259, 2019. doi: 10.1109/TG.2018.2877047. The 2022 IEEE Transactions on Games Outstanding Paper Award.

Mengjiao Yang, Yilun Du, Kamyar Ghasemipour, Jonathan Tompson, Dale Schuurmans, and Pieter Abbeel. Learning interactive real-world simulators. arXiv preprint arXiv:2310.06114, 2023.

Tianwei Yin, Michaël Gharbi, Richard Zhang, Eli Shechtman, Frédo Durand, William T. Freeman, and Taesung Park. One-step diffusion with distribution matching distillation. In CVPR, 2024.

Richard Zhang, Phillip Isola, Alexei A. Efros, Eli Shechtman, and Oliver Wang. The unreasonable effectiveness of deep features as a perceptual metric. In CVPR, 2018.
:::info Authors: Dani Valevski, Yaniv Leviathan, Moab Arar, and Shlomi Fruchter
:::
:::info This paper is available on arxiv under CC by 4.0 Deed (Attribution 4.0 International) license.
:::
2026-01-30 01:00:04
2 INTERACTIVE WORLD SIMULATION
3.1 DATA COLLECTION VIA AGENT PLAY
3.2 TRAINING THE GENERATIVE DIFFUSION MODEL
4.1 AGENT TRAINING
4.2 GENERATIVE MODEL TRAINING
5.1 SIMULATION QUALITY
5.2 ABLATIONS
7 DISCUSSION, ACKNOWLEDGEMENTS AND REFERENCES
Interactive 3D Simulation Simulating the visual and physical processes of 2D and 3D environments and allowing interactive exploration of them is an extensively developed field in computer graphics (Akenine-Möller et al., 2018). Game engines, such as Unreal and Unity, are software systems that process representations of scene geometry and render a stream of images in response to user interactions. The game engine is responsible for keeping track of all world state, e.g. the player position and movement, objects, character animation, and lighting. It also tracks the game logic, e.g. points gained by accomplishing game objectives. Film and television productions use variants of ray tracing (Shirley & Morley, 2008), which are too slow and compute-intensive for real-time applications. In contrast, game engines must maintain a very high frame rate (typically 30-60 FPS), and therefore rely on highly-optimized polygon rasterization, often accelerated by GPUs. Physical effects such as shadows, particles, and lighting are often implemented using efficient heuristics rather than physically accurate simulation.
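As a schematic illustration of the loop just described (not the architecture of Unreal, Unity, or any specific engine), a fixed-frame-rate engine core might look like the following sketch, where `world`, `renderer`, `get_input`, and `running` are hypothetical interfaces:

```python
import time

TARGET_FPS = 60
FRAME_BUDGET = 1.0 / TARGET_FPS  # ~16.7 ms per frame at 60 FPS

def run(world, renderer, get_input, running):
    """Fixed-frame-rate loop: track state, apply game logic, rasterize."""
    while running():
        start = time.perf_counter()
        user_input = get_input()   # poll keyboard/controller
        world.update(user_input)   # positions, animation, lighting, game logic
        renderer.rasterize(world)  # highly-optimized polygon rasterization
        # Sleep off any leftover budget to hold the target frame rate.
        elapsed = time.perf_counter() - start
        if elapsed < FRAME_BUDGET:
            time.sleep(FRAME_BUDGET - elapsed)
```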
Neural 3D Simulation Neural methods for reconstructing 3D representations have made significant advances in recent years. NeRFs (Mildenhall et al., 2020) parameterize radiance fields using a deep neural network that is specifically optimized for a given scene from a set of images taken from various camera poses. Once trained, novel points of view of the scene can be sampled using volume rendering methods. Gaussian Splatting (Kerbl et al., 2023) approaches build on NeRFs but represent scenes using 3D Gaussians and adapted rasterization methods, unlocking faster training and rendering times. While demonstrating impressive reconstruction results and real-time interactivity, these methods are often limited to static scenes.
Video Diffusion Models Diffusion models have achieved state-of-the-art results in text-to-image generation (Saharia et al., 2022; Rombach et al., 2022; Ramesh et al., 2022; Podell et al., 2023), a line of work that has also been applied to text-to-video generation tasks (Ho et al., 2022; Blattmann et al., 2023b;a; Gupta et al., 2023; Girdhar et al., 2023; Bar-Tal et al., 2024). Despite impressive advancements in realism, text adherence, and temporal consistency, video diffusion models remain too slow for real-time applications. Our work extends this line of work and adapts it for real-time generation, conditioned autoregressively on a history of past observations and actions.
Game Simulation and World Models Several works have attempted to train models for game simulation with action inputs. Yang et al. (2023) build a diverse dataset of real-world and simulated videos and train a diffusion model to predict a continuation video given a previous video segment and a textual description of an action. Menapace et al. (2021) and Bruce et al. (2024) focus on unsupervised learning of actions from videos. Menapace et al. (2024) convert textual prompts to game states, which are later converted to a 3D representation using NeRF. Unlike these works, we focus on interactive, playable, real-time simulation, and demonstrate robustness over long-horizon trajectories. We leverage an RL agent to explore the game environment and create rollouts of observations and interactions for training our interactive game model.
Another line of work explored learning a predictive model of the environment and using it to train an RL agent. Ha & Schmidhuber (2018) train a Variational Auto-Encoder (Kingma & Welling, 2014) to encode game frames into a latent vector, and then use an RNN to mimic the VizDoom game environment, training on rollouts from a random policy (i.e. selecting an action at random). A controller policy is then learned by playing within the "hallucinated" environment. Hafner et al. (2020) demonstrate that an RL agent can be trained entirely on episodes generated by a learned world model in latent space. Also close to our work is Kim et al. (2020), who use an LSTM architecture for modeling the world state, coupled with a convolutional decoder for producing output frames, jointly trained under an adversarial objective. While this approach seems to produce reasonable results for simple games like Pac-Man, it struggles with simulating the complex environment of VizDoom and produces blurry samples. In contrast, GameNGen is able to generate samples comparable to those of the original game; see Figure 2. Finally, concurrently with our work, Alonso et al. (2024) train a diffusion world model to predict the next observation given the observation history, and iteratively train the world model and an RL agent on Atari games.
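As a schematic of the world-model recipe described above (the Ha & Schmidhuber style VAE-plus-RNN setup), the sketch below uses hypothetical stand-in components (`env`, `vae`, `rnn`, `controller`) and helper functions rather than code from any of the cited papers:

```python
def frames_from(rollouts):
    """Hypothetical helper: flatten rollouts into a list of frames."""
    return [frame for ro in rollouts for (frame, _action) in ro]

def latent_transitions(vae, rollouts):
    """Hypothetical helper: build (z_t, a_t, z_{t+1}) training triples."""
    triples = []
    for ro in rollouts:
        zs = [vae.encode(frame) for (frame, _action) in ro]
        acts = [action for (_frame, action) in ro]
        triples += list(zip(zs[:-1], acts[:-1], zs[1:]))
    return triples

def train_world_model(env, vae, rnn, controller, random_policy, n_rollouts):
    # 1. Collect rollouts of (frame, action) pairs with a random policy.
    rollouts = [env.rollout(random_policy) for _ in range(n_rollouts)]
    # 2. Train a VAE to compress game frames into latent vectors.
    vae.fit(frames_from(rollouts))
    # 3. Train an RNN to mimic the environment in latent space:
    #    predict z_{t+1} from (z_t, a_t).
    rnn.fit(latent_transitions(vae, rollouts))
    # 4. Learn the controller entirely inside the "hallucinated"
    #    environment unrolled by the RNN, never touching the real game.
    controller.optimize(dream_env=(vae, rnn))
    return controller
```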
DOOM When DOOM was released in 1993, it revolutionized the gaming industry. Introducing groundbreaking 3D graphics technology, it became a cornerstone of the first-person shooter genre, influencing countless other games. DOOM has been studied by numerous research works. It provides an open-source implementation and a native resolution that is low enough for small-sized models to simulate, while being complex enough to be a challenging test case. Finally, the authors have spent countless youth hours with the game, making it a natural choice to use in this work.
:::info Authors: Dani Valevski, Yaniv Leviathan, Moab Arar, and Shlomi Fruchter
:::
:::info This paper is available on arxiv under CC by 4.0 Deed (Attribution 4.0 International) license.
:::
2026-01-30 00:02:17
How are you, hacker?
🪐 What’s happening in tech today, January 29, 2026?
The HackerNoon Newsletter brings the HackerNoon homepage straight to your inbox. On this day, The Birth of Kansas in 1861, Edgar Allan Poe's "The Raven" is Published in 1845, First Inductees of the Baseball Hall of Fame in 1936, and we present you with these top-quality stories. From Humanity's Last Game Of Musical Chairs Has Begun to Why Google Calendar Sync Is Hard (and What Tokens Have to Do With It), let's dive right in.

By @rhortx [ 6 Min read ] Even if AGI isn't feasible, the gains being made right now will drastically disorient the workforce. Read More.

By @anywhichway [ 14 Min read ] Explore the differences between HTMX and Lightview hypermedia. Learn how to choose between pure HDA architecture and Lightview’s multiple paradigms. Read More.

By @tylerdane [ 8 Min read ] What looks like a simple API integration can take weeks to implement properly. Read More.

By @kamilaselig [ 6 Min read ] The enterprise case for AI is ROI; the human case is meaning. Read More.

By @hacker68060072 [ 6 Min read ] From scattered AI pilots to strategic systems: why orchestration, observability, and auditability are the new competitive edge for enterprise AI adoption. Read More.
🧑💻 What happened in your world this week?
It's been said that writing can help consolidate technical knowledge, establish credibility, and contribute to emerging community standards. Feeling stuck? We've got you covered ⬇️⬇️⬇️
ANSWER THESE GREATEST INTERVIEW QUESTIONS OF ALL TIME
We hope you enjoy this week's worth of free reading material. Feel free to forward this email to a nerdy friend who'll love you for it. See you on Planet Internet! With love, The HackerNoon Team ✌️

2026-01-29 23:51:27
In the fast-paced world of mobile application development, where millions of users depend on seamless digital experiences during their most critical moments, exceptional engineering makes the difference between frustration and confidence. Venkata Kalyan Pasupuleti's eight-year journey as a Senior iOS Engineer exemplifies how technical excellence, combined with a deep sense of user empathy, transforms complex requirements into mobile applications that people trust when it matters most.
Venkata Kalyan's entry into software engineering was driven by a fundamental curiosity about how technology works and its potential to simplify everyday life. What began as an interest in turning ideas into working applications evolved into something far more meaningful as he witnessed the real-world impact of his code. Building mobile apps used by real people, particularly in time-critical scenarios like travel, transformed his work from technical exercise into purposeful mission. This realization—that his engineering efforts directly impact users at scale—continues to fuel his commitment to growth and excellence.
With eight years of specialized experience in iOS development, Venkata Kalyan has established himself as an architect of sophisticated mobile applications serving millions of users. His technical proficiency spans the modern iOS ecosystem, including Swift, SwiftUI, and advanced iOS frameworks that power today's most demanding applications. His expertise encompasses enterprise-grade architectural patterns, enabling him to design solutions that balance scalability with maintainability.
Beyond architectural knowledge, Venkata Kalyan brings comprehensive expertise in testing methodologies and performance optimization techniques that deliver measurable improvements in application reliability and user satisfaction. His mastery of modern development practices, combined with deep experience in collaborative workflows, positions him uniquely to translate complex business requirements into elegant, scalable mobile solutions that drive meaningful user engagement and tangible business outcomes.
The pinnacle of Venkata Kalyan's professional achievements centers on his contributions to a major airline's customer-facing iOS application—a platform where reliability isn't optional but essential. His work focused specifically on the day-of-travel experience that travelers depend on during their journeys. This feature enables users to check in, manage boarding passes, add bags, and receive real-time flight updates—functionality that must work flawlessly when users need it most.
As an iOS developer on this high-stakes project, Venkata Kalyan concentrated on performance, reliability, and scalable architecture. The magnitude of impact is staggering: the feature supports over five million monthly users, with demand intensifying during peak holiday travel periods when system reliability becomes paramount. This project became a crucible for professional growth, deepening his understanding of production-scale mobile architecture and high-availability systems design.
The significance of his contributions earned recognition from senior leadership for delivering high-impact iOS features used in production by millions of users. This recognition validates not just technical competence but the ability to deliver under pressure on systems where failure is not an option.
Venkata Kalyan's approach to professional development reflects a commitment to continuous learning that extends beyond formal training. He regularly engages with industry thought leadership through technical blogs and community discussions, gaining real-world iOS engineering insights. His learning regimen includes official platform updates, developer conference sessions, and extensive reading on Swift, SwiftUI, mobile architecture, and performance optimization. Critically, he maintains current through active work on production systems—where theoretical knowledge meets practical application.
Beyond technical content, Venkata Kalyan explores product design and system architecture, recognizing that exceptional engineering requires understanding the broader context in which solutions operate. His side projects and experimentation with emerging technologies maintain the curiosity that first drew him to software engineering while sharpening his problem-solving capabilities.
At the core of Venkata Kalyan's professional identity lie three fundamental values: ownership, reliability, and continuous learning. He takes personal responsibility for the quality and impact of his work, particularly on systems that people depend on during critical moments. This ownership mentality extends beyond writing code to ensuring that solutions perform reliably when users need them most. His commitment to continuous improvement ensures that his skills evolve alongside rapidly changing technology.
Looking forward, Venkata Kalyan's aspirations reflect both technical ambition and leadership vision. He aims to continue building scalable, high-impact mobile platforms that solve real-world problems and serve users at scale. His goal extends beyond individual contribution toward technical leadership roles where he can influence architecture, optimize performance, and mentor emerging engineers. His vision centers on helping teams build reliable products that users can trust in critical moments—a mission statement that connects technical excellence with human impact.
Venkata Kalyan Pasupuleti is a Senior iOS Engineer with eight years of experience architecting enterprise-grade mobile applications that serve millions of users. Specializing in Swift, SwiftUI, and modern iOS frameworks, he combines deep technical expertise in architectural patterns, testing methodologies, and performance optimization with a values-driven approach centered on ownership, reliability, and continuous learning. His work on mission-critical features for a major airline's iOS application demonstrates his ability to deliver high-availability systems at scale, earning recognition from senior leadership for contributions that impact millions of travelers during their most time-sensitive moments.
:::tip This story was distributed as a release by Sanya Kapoor under HackerNoon’s Business Blogging Program.
:::