2025-09-12 22:00:00
I asked an AI agent to play the role of me, an Oxford lecturer on media and AI, and teach me a personal master’s course, based entirely on my own work.
Imagine you had an unlimited budget for individual tutors offering hyper-personalized courses that maximized learners’ productivity and skills development. This summer I previewed this idea—with a ridiculous and solipsistic test.
I asked an AI tutor agent to play the role of me, an Oxford lecturer on media and AI, and teach me a personal master’s course, based entirely on my own work.
I set up the agent via an off-the-shelf ChatGPT tool hosted on the Azure-based Nebula One platform, with a prompt to research and impersonate me, then build personalized material based on what I already think. I didn’t tell the large language model (LLM) what to read or do anything else to enhance its capabilities, such as giving it access to learning materials that aren’t publicly available online.
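Nebula One's internal configuration isn't public, but conceptually the setup is just a persona system prompt plus a course-building instruction. Here is a minimal sketch of that kind of prompt using the standard OpenAI Python client (the model name and prompt wording are illustrative, not my exact setup):

```python
# Illustrative sketch only: Nebula One's actual configuration is not public,
# and the model name below is a placeholder.
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable

PERSONA_PROMPT = (
    "You are an AI tutor impersonating an Oxford lecturer on media and AI. "
    "Research the lecturer's publicly available books, articles, talks, and "
    "interviews, then design a term-long, six-module master's course built "
    "entirely on that body of work. Teach interactively: switch formats often, "
    "ask probing questions, and give instant feedback."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {"role": "system", "content": PERSONA_PROMPT},
        {"role": "user", "content": "Start module one and set my first exercise."},
    ],
)
print(response.choices[0].message.content)
```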
The agent’s course in media and AI was well structured—a term-long, original six-module journey into my own collected works that I had never devised, but admit I would have liked to.
It was interactive and rapid-fire, demanding mental acuity via regular switches in formats. It was intellectually challenging, like good Oxford tutorials should be. The agent taught with rigor, giving instant responses to anything I asked. It had a powerful understanding of the fast-evolving landscape of AI and media through the same lens as me, but had done more homework.
This was apparently fed by my entire multimedia output—books, speeches, articles, press interviews, even university lectures I had no idea had even been recorded, let alone used to train GPT-4 or GPT-5.
The course was a great learning experience, even though I supposedly knew it all already. So in the inevitable student survey, I gave the agentic version of me well-deserved, five-star feedback.
For instance, in a section discussing the ethics of non-player characters (NPCs) in computer games, it asked:
If NPCs are generated by AI, who decides their personalities, backgrounds, or morals? Could this lead to bias or stereotyping?
And:
If an AI NPC can learn and adapt, does it blur the line between character and “entity” [independent actor]?
These are great, philosophical questions, which will probably come to the fore when and if Grand Theft Auto 6 comes out next May. I’m psyched that the agentic me came up with them, even if the real me didn’t.
Agentic me also built on what real me does know. In film, it knew about bog-standard Adobe After Effects, which I had covered (it’s used for creating motion graphics and visual effects). But it added Nuke, a professional tool used to combine and manipulate visual effects in The Avengers, which (I’m embarrassed to say) I had never heard of.
So, where did the agent's knowledge of me come from? My publisher Routledge did a training data deal with OpenAI, which I guess could cover my books on media, AI, and live experience.
Unlike some authors, I’m up for that. My books guide people through an amazing and fast-moving subject, and I want them in the global conversation, in every format and territory possible (Turkish already out, Korean this month).
That availability has to extend to what is now potentially the most discoverable “language” of all, the one spoken by AI models. The priority for any writer who agrees with this should be AI optimization: making their work easy for LLMs to find, process, and use—much like search engine optimization, but for AI.
To build on this, I further tested my idea by getting an agent powered by China’s DeepSeek to run a course on my materials. When I found myself less visible in its training corpus, it was hard not to take offense. There is no greater diss in the age of AI than a leading LLM deeming your book about AI irrelevant.
When I experimented with other AIs, they had issues getting their facts straight, which is very 2024. From Google’s Gemini 2.5 Pro, I learned hallucinatory biographical details about myself like a role running media company The Runaway Collective.
When I asked Elon Musk’s Grok what my best quote was, it said: “Whatever your question, the answer is AI.” That’s a great line, but Google DeepMind’s Nobel-winning Demis Hassabis said it, not me.
This whole, self-absorbed summer diversion was clearly absurd, though not entirely. Agentic self-learning projects are quite possibly what university teaching actually needs: interactive, analytical, insightful, and personalized. And there is some emerging research on their value. A German-led study found that AI-generated feedback helped motivate secondary school students and benefited their exam revision.
It won’t be long before we start to see this kind of real-time AI layer formally incorporated into school and university teaching. Anyone lecturing undergraduates will know that AI is already there. Students use AI transcription to take notes. Lecture content is ripped in seconds from these transcriptions and will have trained a dozen LLMs within the year. To assist with writing essays, ChatGPT, Claude, Gemini, and DeepSeek/Qwen are the sine qua non of Gen Z projects.
But here’s the kicker. As AI becomes ever more central to education, the human teacher becomes more important, not less. They will guide the learning experience, bringing published works to the conceptual framework of a course and driving in-person student engagement and encouragement. They can extend their value as personal AI tutors—via agents—for each student, based on individual learning needs.
Where do younger teachers fit in, who don’t have a back catalog to train LLMs? Well, the younger the teacher, the more AI-native they are likely to be. They can use AI to flesh out their own conceptual vision for a course by widening the research beyond their own work, by prompting the agent on what should be included.
In AI, two alternate positions are often simultaneously true. AI is both emotionally intelligent and tone deaf. It is both a glorified text predictor and a highly creative partner. It is costing jobs, yet creating them. It is dumbing us down, but also powering us up.
So too in teaching. AI threatens the learning space, yet can liberate powerful interaction. A prevailing wisdom is that it will make students dumber. But perhaps AI could actually unlock the next level of personalization, challenge, and motivation for students.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
The post I Got an AI to Impersonate Me and Teach Me My Own Course—Here’s What I Learned About the Future of Education appeared first on SingularityHub.
2025-09-12 03:43:51
An extensive brain map shows how regions collaborate during complex decision-making.
We’re constantly making decisions. If I get the pumpkin spice latte, would it make me happier than my usual black coffee? If I go the scenic route on a trip, would it be worth the extra time?
Past and current experiences affect each decision. By imaging the brain, scientists have long known multiple regions collaborate to pull in memories and integrate them with what we’re seeing, hearing, and thinking when weighing options. But because the resolution is relatively low, we’ve only had a rough sketch of the intricate neural connections involved.
A global collaboration is now digging deeper. In a technological feat, the International Brain Laboratory released a large, dynamic brain map of mice navigating a difficult decision-making task.
Launched in 2017, the group seeks to link brain activity with behavior, one of the holy grails in neuroscience. It’s been an uphill struggle. Prior attempts could only measure small regions, and individual teams used their own behavioral tests, making it difficult to integrate data.
The new collaboration gathered neural electrical recordings in mice from multiple labs across the globe using a standardized procedure. Overall, the scientists used nearly 700 brain implants to record neural activity in 139 mice, capturing the activity of 620,000 neurons across the brain.
“This is the first time anyone has produced a full, brain-wide map of the activity of single neurons during decision-making. The scale is unprecedented … [which] together represent[s] 95 percent of the mouse brain volume,” said study author Alexandre Pouget at the University of Geneva in a press release.
Despite decades of research, scientists still don’t fully understand how we make up our minds.
Say you’re hiking and encounter a bear. The brain immediately goes into hyper mode: The visual cortex identifies the brown thing as a bear and transmits this to the brain’s emotion centers. The latter activate to produce a sense of fear and ask the memory regions what to do. These calculations direct a motor response—back away, make yourself big, or quick-draw that bear spray.
Multiple neural networks fire up to reach the decision, but scientists are divided on how the system works. One camp thinks the brain could combine memories—say, YouTube videos of how to avoid bears—with the fact you’re seeing a bear in high-level brain regions. This hypothesis predicts memories, or prior information, only inform actions at later stages.
Another camp believes the opposite. Rather than waiting until the last second, all regions of the brain—including early sensory systems—incorporate memories to decide the best response. This process could better spread communication throughout the brain.
Like spies tapping phone lines, the authors of the new study hoped to settle the debate by listening in on the chatter of hundreds of thousands of brain cells.
The effort piggybacked on an International Brain Laboratory dataset that used 699 Neuropixels probes, an open-source brain implant, to record the electrical firing of individual neurons in mice. The team strategically placed the devices across nearly 280 brain regions in over a hundred mice and kept the recordings relatively uniform by having every collaborating lab run the same task.
“The scale is unprecedented as we recorded from over half a million neurons across mice…which together represent 95 percent of the mouse brain volume,” said Pouget.
Every lab taught the critters to perform the same difficult challenge. Each mouse entered an arcade of sorts and was shown a black-and-white grating—think zebra skin—on either the left or right side of a screen. They then had to use their front paws to turn a tiny wheel, moving the image to the center within a minute.
If they succeeded, they got a tasty reward. If they failed, they were blasted with a pop of white noise and a short time-out. Between trials, the mice tried to keep their paws on the wheel as they waited for the next test.
Here's the crux: The game was rigged. There was an 80 percent chance the grating would appear on one side, teaching the mice that was the best bet. As the trials went on, the grating slowly faded to the point it was almost impossible to see. The mice then had to decide whether to move it left or right based on what they'd previously learned as a best guess.
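For a concrete feel of the task's statistics, here is a minimal simulation of that biased design. The 80/20 bias follows the description above; the contrast values and the simulated "mouse" strategy are illustrative assumptions, not taken from the study.

```python
import random

# Sketch of the biased detection task described above. The 80/20 bias comes
# from the study's design; the contrast values and the simulated "mouse"
# strategy are illustrative assumptions.
BIAS_SIDE = "left"                         # side favored in this block of trials
P_BIAS = 0.8                               # 80 percent of gratings appear on that side
CONTRASTS = [1.0, 0.5, 0.25, 0.125, 0.0]   # 0.0 = grating impossible to see

def run_trial():
    side = BIAS_SIDE if random.random() < P_BIAS else ("right" if BIAS_SIDE == "left" else "left")
    contrast = random.choice(CONTRASTS)
    # A visible grating drives the choice directly; an invisible one forces
    # the simulated mouse to fall back on the learned prior (the biased side).
    choice = side if contrast > 0 else BIAS_SIDE
    return choice == side                  # True -> reward, False -> noise and time-out

rewards = sum(run_trial() for _ in range(1000))
print(f"Reward rate over 1,000 trials: {rewards / 1000:.2f}")
```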
Each lab recorded brain signals as the mice made their choices and sent the data to a central database. In all, the consortium isolated the activity patterns of nearly 76,000 neurons across the brain. The recording sites were then stitched together using two-photon microscopy, a technique that precisely aligns the anatomical regions with the maps of electrical activity recorded in them.
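As for what a neuron's "activity pattern" looks like in practice: a single cell's spike times are typically summarized around a trial event such as stimulus onset. The sketch below shows a generic peri-event firing-rate calculation with NumPy; it is not the consortium's analysis code, and all names and numbers are hypothetical.

```python
import numpy as np

def peri_event_rate(spike_times, event_times, window=(-0.5, 1.0), bin_size=0.05):
    """Average firing rate (Hz) of one neuron around repeated trial events.

    spike_times: 1D array of one neuron's spike times, in seconds.
    event_times: 1D array of event times (e.g., stimulus onsets), in seconds.
    """
    edges = np.arange(window[0], window[1] + bin_size, bin_size)
    counts = np.zeros(len(edges) - 1)
    for t in event_times:
        counts += np.histogram(spike_times - t, bins=edges)[0]
    return counts / (len(event_times) * bin_size)

# Hypothetical example: random spikes around 100 made-up stimulus onsets.
rng = np.random.default_rng(0)
events = np.arange(10.0, 110.0, 1.0)
spikes = np.sort(rng.uniform(0, 120, size=5_000))
print(peri_event_rate(spikes, events)[:5])
```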
“We’d seen how successful large-scale collaborations in physics had been at tackling questions no single lab could answer, and we wanted to try that same approach in neuroscience,” said study author Tom Mrsic-Flogel at University College London. “The brain is the most complex structure we know of in the universe and understanding how it drives behavior requires international collaboration on a scale that matches that complexity.”
Using data from the new brain map, the team realized that decision-making isn’t linear. Instead, multiple brain regions—including so-called “early” sensory ones—contribute to the final choice.
For example, brain regions in mice that process visual information sparked with activity upon seeing the grating. That activity then spread and ramped up in a wave-like pattern toward brain regions associated with emotion. These signals guided the mice to incorporate previous learning, called priors, into final decisions on what to do—move the wheel left or right.
Before, scientists thought priors were encoded in brain regions related to memory and higher cognition. But the new map suggests their signals also influence the early sensory processing regions that contribute to eventual responses.
“The efforts of our collaboration generated fundamental insights about the brain-wide circuits that support complex cognition,” said study author Anne Churchland at UCLA. “This is really exciting and a major step forward relative to the ‘piecemeal’ approach (1-2 brain areas at a time) that was previously the accepted method in the field.”
The International Brain Laboratory is releasing the entire database, with the goal of eventually understanding the brain's computations within and across the regions behind decision-making. The dataset could shed light on neurological conditions that impair decision-making, such as obsessive-compulsive disorder, Parkinson's disease, and addiction.
The post In a First, Scientists Record Decision-Making as It Happens Across a Whole Mouse Brain appeared first on SingularityHub.
2025-09-10 07:10:10
Scientists want to know if a biohybrid robot can form a long-lasting biological “mind” to direct movement.
It’s a bizarre sight: With a short burst of light, a sponge-shaped robot scoots across a tiled surface. Flipped on its back, it repeatedly twitches as if doing sit-ups. By tinkering with the light’s frequency, scientists can change how fast the strange critter moves—and how long it needs to “rest” after a long crawl.
Soft robots are nothing new, but the spongy bot stands out in that it blends living muscle and brain cells with a 3D-printed skeleton and wireless electronics. The neurons, genetically altered to respond to light, trigger neighboring muscles to contract or release.
Watching the robot crawl around is amusing, but the study’s main goal is to see if a biohybrid robot can form a sort of long-lasting biological “mind” that directs movement. Neurons are especially sensitive cells that rapidly stop working or even die outside of a carefully controlled environment. Using blob-like amalgamations of different types of neurons to direct muscles, the sponge-bots retained their crawling ability for over two weeks.
Scientists have built biohybrid bots that use electricity or light to control muscle cells. Some mimic swimming, walking, and grabbing motions. Adding neurons could further fine-tune their activity and flexibility and even bestow a sort of memory for repeated tasks.
These biohybrid bots offer a unique way to study motion, movement disorders, and drug development without lab animals. Because their components are often compatible with living bodies, they could be used for diagnostics, drug delivery, and other medical scenarios.
The word robot often conjures images of Terminator’s metal T-800. Soft robots have the potential to be far more flexible and agile. Being able to slightly deform lets them squeeze through tiny spaces, monitor fragile ecosystems like coral reefs, explore the deep sea, and potentially snake through the body with minimal damage to surrounding tissues.
In addition to synthetic materials and mechanisms, another way to build soft robots is inspired by nature. From blue whales to rodents and humans—all rely on similar biological machinery to move. Motor neurons in muscles receive directions from the brain and spinal cord. They then release chemicals that trigger muscles to contract or relax.
The process is energy efficient and rapidly adapts to sudden changes in the environment—like stepping over an unexpected doorstep instead of tripping. Though today’s robots are getting more agile, they still struggle with unexpected landmines in uneven terrain. Adding neuromuscular junctions could lead to more precise and efficient robots.
Last year, in a proof of concept, one team engineered a swimming "stingray" bot using stem cell-derived neurons, heart muscle cells, and an electronic "brain." Scientists combined the cells and electronic brain with an artificial skeleton to make a soft robot that could flap its fins and roam a swimming pool.
There was a surprise too—the junctions between the two cell types developed electrical synapses. Usually, neurons release chemicals to direct muscle movements. These connections are called chemical synapses. While electrical networks are faster, they’re generally less adaptable.
The new study aimed to create chemical synapses in robots.
The team first 3D printed a skeleton shaped roughly like a figure eight, but with a wider middle section. Each side formed a trough with one side slightly deeper than the other. The troughs were intended to function as legs. The researchers then embedded muscle cells from mice in a nutritious gel contained in each trough. After five days, the cells had formed slivers of muscle capable of contracting throughout the legs.
The robot’s “brain” sat in the middle part of the figure eight. The team made tiny blobs of neural tissue, called neurospheres, out of stem cells genetically engineered to activate with light. The blobs contained a mix of brain cells, including motor neurons to control muscles.
The neurospheres connected with the muscle tissue days after transplantation. The cells formed neuromuscular junctions similar in form and function to those in our bodies, and the neurons began pumping out the chemicals that control muscle function.
Then came an electronic touch. The team added a hub to wirelessly detect light pulses, harvest power, and drive five tiny micro-LED lights to change brain cell activity and translate it into movement.
The robot moved at turtle speed, roughly 0.8 millimeters per minute. However, the legs twitched in tandem throughout the trials, suggesting the neurons and muscles formed a sort of synchrony in their connections.
Surprisingly, some bots kept moving even after turning off the light, while other “zombie” bots spontaneously moved on their own. The team is still digging into why this happens. But differences in performance were expected—living components are far less controllable than inorganic parts.
Like after a tough workout, the bots also needed breaks. And when the robots were flipped on their backs, their legs kept moving for roughly two weeks before failing. This is likely due to metabolic toxins gradually building up inside the robots, though the team is still looking for the root cause.
Despite their imperfections, the bots are essentially built from living mini neural networks and tissue connected to electronics—true cyborgs. They “provide a valuable platform for understanding…the emergent behaviors of neurons and neuromuscular junctions,” wrote the team.
The researchers are now planning to explore different skeletons and monitor behavior to fine-tune control. Adding more advanced features like sensory feedback and a range of muscle structures could help the bots further mimic the agility of our nervous system. And multiple neural “centers,” like in sea creatures, could control different muscles in robots that look nothing like us.
The post This Crawling Robot Is Made With Living Brain and Muscle Cells appeared first on SingularityHub.
2025-09-09 07:08:42
Thanks to AI and an electrode-studded cap, participants controlled a robotic arm with just their thoughts.
A host of tech startups are racing to build brain implants, but there may be limits to how widely such invasive technology can be adopted. New research shows that pairing AI with less invasive brain-computer interfaces could provide another promising direction.
The cutting-edge brain implants being developed by companies like Neuralink and Precision Neuroscience are initially aimed at medical applications. But techno-optimists also hope that in the future this technology could be used by everyday people to boost cognition, control technology with their thoughts, and even merge their minds with AI.
But implanting these devices requires risky brain surgery and can lead to immune reactions that degrade an implant’s performance or even require it be removed. When treating serious disabilities or diseases these risks can often be justified, but the calculus is trickier for healthy people with no real medical need.
There are less invasive brain interfaces that record electrical signals from outside the skull, but they are typically much less accurate at detecting brain signals. Now, researchers from the University of California, Los Angeles have shown that combining these devices with an “AI copilot” can dramatically boost performance and even allow people to control a robotic arm.
“We’re aiming for much less risky and invasive avenues,” Jonathan Kao, who led the research, said in a press release. “Ultimately, we want to develop AI-BCI systems that offer shared autonomy, allowing people with movement disorders, such as paralysis or ALS, to regain some independence for everyday tasks.”
The non-invasive device the researchers used in their experiments was a cap featuring 64 electrodes designed to capture electroencephalography, or EEG, signals. They developed a custom algorithm to decode these signals, which they then combined with AI copilots designed for specific tasks. The system was tested by four study participants, one of whom was paralyzed from the waist down.
The first task was moving a cursor on a computer screen to hover over eight different targets for at least half a second. Using reinforcement learning, the team trained the AI copilot to infer which target the user was aiming for from the EEG decoder's outputs and the positions of the targets and cursor. The copilot then used this information to help steer the cursor in the right direction.
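The paper's exact architecture aside, the core "shared autonomy" idea can be sketched in a few lines: blend the noisy decoded command with a pull toward the copilot's best guess at the target. The blending weight, the target inference, and all names below are illustrative assumptions, not the authors' trained policy.

```python
import numpy as np

def copilot_step(cursor, decoded_vel, targets, target_probs, alpha=0.5, gain=0.05):
    """One shared-autonomy update: blend the EEG-decoded velocity with a pull
    toward the copilot's most likely target. Illustrative only; the blending
    weight and target_probs are assumptions, not the paper's trained policy.
    """
    goal = targets[int(np.argmax(target_probs))]   # copilot's best guess
    assist = goal - cursor
    norm = np.linalg.norm(assist)
    if norm > 0:
        assist = assist / norm                     # unit vector toward the goal
    blended = (1 - alpha) * decoded_vel + alpha * assist
    return cursor + gain * blended

# Hypothetical example: eight targets on a ring, one noisy decoded velocity.
angles = np.linspace(0, 2 * np.pi, 8, endpoint=False)
targets = np.stack([np.cos(angles), np.sin(angles)], axis=1)
probs = np.full(8, 1 / 8)
probs[2] = 0.5                                     # copilot favors target 2
probs /= probs.sum()
print(copilot_step(np.zeros(2), np.array([0.1, -0.2]), targets, probs))
```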
In a paper in Nature Machine Intelligence, the researchers report that the copilot boosted the success rate of the healthy participants by a factor of two compared to using the interface without AI, while the paralyzed participant saw their success rate quadruple.
The researchers then had users control a robotic arm with the interface to move four colored blocks on a table to randomly placed markers. The copilot for this task worked on similar principles but used a camera feed to detect the position of blocks and targets on the table.
With the copilot’s aid, the healthy participants solved the task significantly faster. The paralyzed participant was unable to complete the task without help from the copilot, but once it was activated, they were successful 93 percent of the time.
The researchers say the study shows this kind of "shared autonomy" approach—where AI and brain-interface users collaborate to solve tasks—can significantly boost the performance of non-invasive technology. They suggest it could improve invasive implants as well.
In fact, Neuralink is already experimenting with similar approaches. Earlier this year, MIT Technology Review reported that one of the company’s test subjects was using the AI chatbot Grok to help draft messages and speed up the rate at which he could communicate.
However, Mark Cook at the University of Melbourne in Australia told Nature that researchers need to be careful about how much control is given to the AI in these kinds of setups. “Shared autonomy must not come at the cost of user autonomy, and there is a risk that AI interventions could override or misinterpret user intent,” he said.
Nonetheless, it seems the dream of brain-computer interfaces allowing AI and human minds to interact more seamlessly may be arriving ahead of schedule.
The post An AI Copilot Quadrupled the Performance of This Wearable Brain-Reading Device appeared first on SingularityHub.
2025-09-06 22:00:00
This Robot Only Needs a Single AI Model to Master Humanlike Movements
Will Knight | Wired
“The new Atlas work is a big sign that robots are starting to experience the kind of equivalent advances in robotics that eventually led to the general language models that gave us ChatGPT in the field of generative AI.”
‘World Models,’ an Old Idea in AI, Mount a Comeback
John Pavlus | Quanta Magazine
“You’re carrying around in your head a model of how the world works. …The deep learning luminaries Yann LeCun (of Meta), Demis Hassabis (of Google DeepMind), and Yoshua Bengio (of Mila, the Quebec Artificial Intelligence Institute) all believe world models are essential for building AI systems that are truly smart, scientific and safe.”
Synthesia’s AI Clones Are More Expressive Than Ever. Soon They’ll Be Able to Talk Back.
Rhiannon Williams | MIT Technology Review
“This demonstration shows how much harder it’s becoming to distinguish the artificial from the real. And before long, these avatars will even be able to talk back to us. But how much better can they get? And what might interacting with AI clones do to us?”
Anthropic to Pay at Least $1.5 Billion in Landmark Copyright Settlement
Melissa Korn and Jeffrey A. Trachtenberg | The Wall Street Journal
“The settlement could influence the outcome of pending litigation between other media companies and AI firms, and may push the tech companies to seek licensing agreements with content owners whose works are considered vital for training purposes.”
Why Anthropic’s Coding Prediction Hasn’t Panned Out
Stephanie Palazzolo and Rocket Drew | The Information
“In March, Anthropic CEO Dario Amodei predicted that AI would be writing 90% of all code in three to six months. It’s been over six months since then, so how does Amodei’s prediction hold up? We asked Anthropic’s chatbot Claude. ‘Grade: F (Failed Prediction),’ begins Claude’s answer. ‘The prediction that AI would be writing 90% of all code within 3-6 months was wildly off the mark.'”
Cutting-Edge AI Was Supposed to Get Cheaper. It’s More Expensive Than Ever.
Christopher Mims | The Wall Street Journal
“The latest AI models are doing more ‘thinking,’ especially when used for deep research, AI agents, and coding. So while the price of a unit of AI, known as a token, continues to drop, the number of tokens needed to accomplish many tasks is skyrocketing. It’s the opposite of what many analysts and experts predicted even a few months ago.”
Apple Is Working on AI-Powered Search Engine
Aaron Tilley | The Information
“The company is planning to release the web-search feature alongside its delayed Siri revamp in the spring of next year, Bloomberg also reported. With the search tool, Siri would be more capable of looking up information across the web without linking to external services.”
Waymo Expands to Denver and Seattle With Its Zeekr-Made Vans
Sean O’Kane | TechCrunch
“The new cities join a growing list of places where Waymo is operating in the US. Just last week the company announced that it has more than 2,000 robotaxis in its commercial fleet countrywide, with 800 in the San Francisco Bay Area, 500 in Los Angeles, 400 in Phoenix, 100 in Austin, and ‘dozens’ in Atlanta. Waymo has also announced plans to launch a commercial robotaxi services in Dallas, Miami, and Washington, DC, next year, and recently received a permit to start testing in New York City.”
The Less You Know About AI, the More You Are Likely to Use It
Heidi Mitchell | The Wall Street Journal
“When it comes to most new technologies, early adopters tend to be the people who know and understand the tools the best. With artificial intelligence, the opposite seems to be true. This counterintuitive finding comes from new research, which suggests that the people most drawn to AI tend to be those who understand the technology the least.”
How Tech Giants Are Spreading the Risk of the AI Buildout
Miles Kruppa | The Information
“The speed and scale of the AI buildout is now forcing [companies] to find outside sources of capital, a sign of how the costs of AI are weighing on even the largest tech companies as they outline plans to spend upward of $100 billion annually on new buildings and equipment.”
Should AI Get Legal Rights?
Kylie Robison | Wired
“In the often strange world of AI research, some people are exploring whether the machines should be able to unionize. I’m joking, sort of. In Silicon Valley, there’s a small but growing field called model welfare, which is working to figure out whether AI models are conscious and deserving of moral considerations, such as legal rights.”
The post This Week’s Awesome Tech Stories From Around the Web (Through September 6) appeared first on SingularityHub.
2025-09-06 05:46:06
Scientists mapped the timing of plant growth cycles around the globe with 20 years of satellite imagery.
The annual clock of the seasons—winter, spring, summer, autumn—is often taken as a given. But our new study in Nature, using a new approach for observing seasonal growth cycles from satellites, shows that this notion is far too simple.
My colleagues and I present an unprecedented and intimate portrait of the seasonal cycles of Earth’s land-based ecosystems. This reveals “hotspots” of seasonal asynchrony around the world—regions where the timing of seasonal cycles can be out of sync between nearby locations.
We then show these differences in timing can have surprising ecological, evolutionary, and even economic consequences.
The seasons set the rhythm of life. Living things, including humans, adjust the timing of their annual activities to exploit resources and conditions that fluctuate through the year.
The study of this timing, known as phenology, is an age-old form of human observation of nature. But today, we can also watch phenology from space.
With decades-long archives of satellite imagery, we can use computing to better understand seasonal cycles of plant growth. However, methods for doing this are often based on the assumption of simple seasonal cycles and distinct growing seasons.
This works well in much of Europe, North America, and other high-latitude places with strong winters. But these methods can struggle in the tropics and in arid regions, where satellite-based estimates of plant growth can vary subtly throughout the year without clear-cut growing seasons.
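To make that limitation concrete: the conventional approach boils down to fitting a single annual harmonic to a pixel's greenness record and reading off when it peaks. The sketch below illustrates that standard single-cycle assumption; it is not the new analysis we developed, and the data in it are made up.

```python
import numpy as np

def annual_peak_day(day_of_year, greenness):
    """Fit a single annual harmonic to a greenness series (e.g., NDVI) and
    return the estimated day of peak growth. This is the standard one-cycle
    assumption, not the new analysis in our study; the data below are made up.
    """
    t = 2 * np.pi * np.asarray(day_of_year) / 365.25
    X = np.column_stack([np.ones_like(t), np.cos(t), np.sin(t)])
    a, b, c = np.linalg.lstsq(X, np.asarray(greenness), rcond=None)[0]
    peak_phase = np.arctan2(c, b) % (2 * np.pi)    # phase of the fitted cycle
    return peak_phase * 365.25 / (2 * np.pi)

# Hypothetical pixel: a 16-day revisit over 20 years, greenness peaking near day 180.
days = np.arange(0, 365 * 20, 16)
greenness = 0.5 + 0.3 * np.cos(2 * np.pi * (days - 180) / 365.25)
greenness += np.random.default_rng(1).normal(0, 0.02, size=days.shape)
print(round(annual_peak_day(days % 365.25, greenness)))
```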
By applying a new analysis to 20 years of satellite imagery, we made a better map of the timing of plant growth cycles around the globe. Alongside expected patterns, such as delayed spring at higher latitudes and altitudes, we saw more surprising ones too.
One surprising pattern happens across Earth’s five Mediterranean climate regions, where winters are mild and wet and summers are hot and dry. These include California, Chile, South Africa, southern Australia, and the Mediterranean itself.
These regions all share a “double peak” seasonal pattern, previously documented in California, because forest growth cycles tend to peak roughly two months later than other ecosystems. They also show stark differences in the timing of plant growth from their neighboring drylands, where summer precipitation is more common.
This complex mix of seasonal activity patterns explains one major finding of our work: The Mediterranean climates and their neighboring drylands are hotspots of out-of-sync seasonal activity. In other words, they are regions where the seasonal cycles of nearby places can have dramatically different timing.
Consider, for example, the marked difference between Phoenix, Arizona (which receives similar amounts of winter and summer rainfall) and Tucson, only 160 kilometers away (where most rainfall comes from the summer monsoon).
Other global hotspots occur mostly in tropical mountains. The intricate patterns of out-of-sync seasons we observe there may relate to the complex ways in which mountains influence airflow, dictating local patterns of seasonal rainfall and cloud cover. These phenomena are still poorly understood but may be fundamental to the distribution of species in these regions of exceptional biodiversity.
Identifying global regions where seasonal patterns are out of sync was the original motivation for our work. And our finding that they overlap with many of Earth’s biodiversity hotspots—places with large numbers of plant and animal species—may not be a coincidence.
In these regions, because seasonal cycles of plant growth can be out of sync between nearby places, the seasonal availability of resources may be out of sync, too. This would affect the seasonal reproductive cycles of many species, and the ecological and evolutionary consequences could be profound.
One such consequence is that populations with out-of-sync reproductive cycles would be less likely to interbreed. As a result, these populations would be expected to diverge genetically and, perhaps, eventually even split into different species.
If this happened to even a small percentage of species at any given time, then over the long haul these regions would produce large amounts of biodiversity.
We don’t yet know whether this has really been happening. But our work takes the first steps towards finding out.
We show that, for a wide range of plant and animal species, our satellite-based map predicts stark on-ground differences in the timing of plant flowering and in genetic relatedness between nearby populations.
Our map even predicts the complex geography of coffee harvests in Colombia. Here, coffee farms separated by a day’s drive over the mountains can have reproductive cycles as out of sync as if they were a hemisphere apart.
Understanding seasonal patterns in space and time isn’t just important for evolutionary biology. It is also fundamental to understanding the ecology of animal movement, the consequences of climate change for species and ecosystems, and even the geography of agriculture and other forms of human activity.
Want to know more? You can explore our results in more detail with this interactive online map.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
The post Watch Earth’s Seasons Change the Face of the Planet in a New Animated Map appeared first on SingularityHub.