No, we haven’t uploaded a fly yet

In the last two weeks, social media was set abuzz by claims that scientists had succeeded in uploading a fruit fly. It started with a video released by the startup Eon Systems, a company that wants to create “Brain emulation so humans can flourish in a world with superintelligence.”

On the left of the video, a virtual fly walks around in a sandpit looking for pieces of banana to eat, occasionally pausing to groom itself along the way. On the right is a dancing constellation of dots resembling the fruit fly brain, set above the caption ‘simultaneous brain emulation’.

At first glance, this appears astounding – a digitally recreated animal living its life inside a computer. And indeed, this impression was seemingly confirmed when, a couple of days after the video’s initial release on X by cofounder Alex Wissner-Gross, Eon’s CEO Michael Andregg explicitly posted “We’ve uploaded a fruit fly”.

Yet “extraordinary claims require extraordinary evidence, not just cool visuals”, as one neuroscientist put it in response to Andregg’s post. If Eon had indeed succeeded in uploading a fly – a goal much of the fly neuroscience community had thought to be decades away – they’d need more than a video to prove it.

Did the upload show evidence of known neurophysiological markers of working memory, such as the head-direction ring attractor bump? How did their brain model actually control the virtual fly body, given it seemed to lack a modeled spinal cord? Where were the data and the write-up?

Because if Eon couldn’t back up what their video seemed to show, at least some neuroscientists were going to be markedly less than impressed.

Eon did follow up with a blog post – How the Eon Team Produced A Virtual Embodied Fly – detailing how they combined pre-existing models of the fly brain and body into a system that could respond to virtual environmental cues.
But for the neuroscientists scrutinising the uploading claim, these details only sharpened their objections – so much so that some are accusing Eon of misleading conduct and gross misrepresentation.

To understand just why these scientists are so upset, you need a bit of context.

A brief history of fruit fly connectomics

The fruit fly Drosophila melanogaster has been a workhorse of neuroscience for decades; its brain is small enough to be tractable but complex enough to produce genuinely interesting behaviour such as learning, navigation, decision-making, and courtship. A long-running ambition within the community has been to map the complete wiring diagram – a ‘connectome’ – of that brain, and in October 2024, after years of incremental progress, the FlyWire Consortium achieved it: a complete connectome of the adult fly brain, documenting all 139,255 neurons and over 50 million synaptic connections.

These increasingly complete connectomes have enabled increasingly elaborate computational models. In 2024, Shiu et al. published a model of the entire adult fly brain in which every neuron and neural connection was represented, albeit in highly simplified form (ignoring differences in cell shape, neurotransmitter dynamics, and much else). Despite these simplifications, the model could predict which neurons activate in response to sensory stimuli and identify pathways underlying behaviors like feeding and grooming – a striking demonstration that wiring alone carries substantial information about function. Separately, Lappalainen et al. built a ‘connectome-constrained’ model of the fly’s visual system, whose predictions matched real neural recordings across dozens of experiments.

Meanwhile, other researchers had built NeuroMechFly, a biomechanical simulation of the adult fly body based on micro-CT scans of real anatomy.
Updated to a second version in late 2024, the new virtual fly body could walk, groom, or be trained via reinforcement learning to navigate through virtual environments. Crucially, it could also be driven by any other kind of external controller.

One of the videos in the NeuroMechFly v2 publication, demonstrating a ‘hierarchical sensorimotor task in [a] closed loop’. There’s no connectome involved here, yet the behavior is still remarkably similar to the Eon demo.

By early 2025, the pieces Eon needed for their demo were largely in place: a complete brain connectome, computational models of both the central brain and the visual system, and a detailed biomechanical body model. All that remained was to wire them together.

So, what did Eon actually do?

Eon took the pre-existing components we just described – the Shiu et al. brain model and the NeuroMechFly v2 body – and connected them into a closed loop: sensory events in a virtual world feed into the brain model, and selected outputs from the brain model direct the virtual body.

The loop has four steps. First, something happens in the virtual environment – the fly’s leg contacts a sugar source, or dust accumulates on its antennae – and these events activate specific sensory neurons in the brain model. Second, the brain model runs for a 15-millisecond time step, propagating activity through the connectome’s ~140,000 simplified digital neurons. Third, Eon reads out the activity of a small, hand-picked set of descending neurons and translates it into high-level commands – turn left, walk forward, groom, feed – that are passed to pre-trained motor controllers in the body model. Fourth, the body moves, changing what the fly senses, and the loop repeats.

The result is the video that went viral. But the behaviors on screen are less impressive than they appear, because the brain model is doing far less of the work than a viewer would naturally assume.

Take the walking.
The brain model does not orchestrate the fly’s legs. It doesn’t compute the gait cycle, coordinate the six limbs, or position the joints. It activates a few descending neurons – oDN1 for forward velocity, DNa01/DNa02 for steering – and hands that signal off to a locomotion controller within NeuroMechFly that already knows how to walk. The brain is issuing something like a “go forward” or “turn left” instruction; the body model handles everything else. In a biological fly, the detailed work of translating such commands into coordinated leg movements is performed by ~15,000 neurons in the ventral nerve cord (the fly’s equivalent of a spinal cord), none of which are simulated here. The same applies to grooming: the connectome selects the behavior, but NeuroMechFly’s controllers execute it.

In their blog post, Eon are open about this. They compare the descending neurons to a car’s steering wheel, accelerator, and brake – you can predict what the car will do from these controls “without explicitly simulating every combustion event inside the engine.” They also acknowledge that the visual system activity displayed so prominently in the video – derived from the Lappalainen model – is “somewhat decorative” and does not substantially drive behavior. They do note that the brain-body mappings are in some cases “somewhat arbitrarily chosen by hand.” And they explicitly state the work “should not yet be interpreted as a proof that structure alone is sufficient to recover the entire behavioral repertoire of the fly.”

This is fair enough, and their efforts to connect brain and body models are genuinely useful engineering. If Eon had described this as “the first integration of connectome-constrained brain and body models into a closed sensorimotor loop”, nobody in the fly neuroscience community would have objected.

But they didn’t say that. They said “We’ve uploaded a fruit fly.” Transparency in a blog post that few will read doesn’t undo a headline that millions saw.
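To make the division of labor concrete, the four-step loop described above can be sketched in miniature. This is purely illustrative: the function names, the neuron labels used as dictionary keys, and the thresholds are stand-ins of our own, and the two stubs replace what are, in the real system, a ~140,000-neuron connectome model and NeuroMechFly’s trained motor controllers.

```python
# Illustrative sketch of the four-step closed loop. All names and
# thresholds are stand-ins, not Eon's actual code.

def sense_environment(world):
    """Step 1: map virtual-world events to sensory neuron activations."""
    active = set()
    if world.get("leg_on_sugar"):
        active.add("sugar_sensing_neurons")
    if world.get("dust_on_antennae"):
        active.add("antennal_mechanosensors")
    return active

def run_brain_step(sensory_input):
    """Step 2: stand-in for one 15 ms pass through the connectome model.
    Returns activity levels for a hand-picked set of descending neurons."""
    activity = {"oDN1": 0.8, "DNa01": 0.1, "DNa02": 0.1,
                "grooming_DN": 0.0, "feeding_DN": 0.0}
    if "antennal_mechanosensors" in sensory_input:
        activity["grooming_DN"] = 1.0
    if "sugar_sensing_neurons" in sensory_input:
        activity["feeding_DN"] = 1.0
    return activity

def decode_command(activity, threshold=0.5):
    """Step 3: translate descending-neuron activity into one of a small
    menu of high-level commands for the body model's controllers."""
    if activity["grooming_DN"] > threshold:
        return "groom"
    if activity["feeding_DN"] > threshold:
        return "feed"
    if activity["DNa01"] - activity["DNa02"] > threshold:
        return "turn_left"
    return "walk_forward"

def step(world):
    """One full pass of the loop; step 4 (body physics) is elided, since
    in the real system NeuroMechFly executes the chosen behavior."""
    return decode_command(run_brain_step(sense_environment(world)))
```

Note how little the ‘brain’ stage is asked to do here – swap `run_brain_step` for a few if-statements on the raw sensory events and `step` emits the same small menu of commands, which is precisely why a video of the resulting behavior cannot, on its own, tell us what the connectome is contributing.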
The typical person who encounters a claim on X, watches the video, and sees a fly walking, grooming, and feeding while a digital brain flickers alongside it is probably not going to think “a simplified brain model is selecting from a small menu of pre-programmed behaviors via a hand-tuned interface.” They’re likely to think the fly has been faithfully recreated inside a computer.

It hasn’t. Eon’s virtual fly implements only a handful of behaviors, and those rely heavily on NeuroMechFly’s pre-trained controllers rather than on the connectome. This is the most fundamental problem with the demo as evidence of an upload: because the body model already knows how to walk, groom, and feed, almost any signal that triggers the right controller at the right time will produce fly-like behavior on screen. You could replace the connectome with a simple rule-based script – if dust, groom; if sugar, feed; otherwise, walk forward – and the resulting video would look much the same. The fly-like behavior the viewer sees is a product of the body model, not the brain. The digitized connectome may be producing meaningful internal dynamics, but this demo cannot tell us whether it is.

What would actually count as uploading a fly?

So if what Eon built isn’t an upload, what would be?

The word ‘upload’ carries a claim that ‘model’ and ‘simulation’ do not. When one says they’ve modeled or simulated a fly, they’re saying they’ve captured some elements of the original insect’s behaviour, but with significant simplifications and assumptions. If instead they say they’ve uploaded a fly, they’re making a claim about the fly itself: that its identity has been faithfully transferred into a new medium, that the thing in the computer in some sense is the fly, just running on a different substrate. When you upload a photo, the file on your computer is the photo.
Nobody says “I’ve partially uploaded this photo” to mean “I’ve made a rough sketch inspired by it.”

An uploaded fly, then, should be able to do everything the original fly could do. It should be playable forward in time indefinitely, responding to novel situations as the original would have. It should serve as a faithful proxy for the real thing; so much so that a neuroscientist could peer inside, observe realistic equivalents of neurophysiology, and run experiments that would be impractical or impossible on a biological fly, with confidence that the results would generalise back.

The leading proposal for how to actually achieve this is whole brain emulation: faithfully recreating the brain’s causal mechanisms at whatever level of detail turns out to be necessary so that the digital system behaves identically to the original. This is what distinguishes emulation from simulation. A weather simulation is useful – it can predict next week’s temperature with reasonable accuracy – but it breaks down when pushed further out, because its approximations are coarser than the actual atmospheric processes of real weather. In contrast, one can run an emulation of the Nintendo 64 game Banjo-Kazooie on a laptop, and because the emulator faithfully recreates the logic of the N64’s hardware – the processor, the memory, the graphics pipeline – the game will never fail to behave as it would have on the original console.

It’s currently an open scientific question what level of biological detail an emulation needs to capture. It’s unlikely we’d need to simulate every ion channel, and perhaps much of the brain’s physiology could be simplified with no consequence. But the key feature of the emulation approach is the guarantee: if you’ve faithfully recreated the causal mechanisms down to the necessary level, the resulting behaviour is trustworthy by construction.
Low-fidelity approaches might produce correct-looking behavior in some cases, but it’s hard to tell to what degree this will generalise to novel situations.

In response to this line of criticism, Michael Andregg has argued that uploading shouldn’t be considered so binary. “I don’t think of uploading as a binary concept”, he told The Verge, outlining “different levels” of upload. By this logic, Eon’s system – containing connectome-derived elements driving behavior in a virtual body – might qualify as a ‘partial upload’.

But if a connectome-constrained model can count as a ‘partial upload’, then the Shiu et al. brain model was already a partial upload before Eon touched it. So was the Lappalainen visual model. So, for that matter, is any computational neuroscience model that incorporates anatomical connectivity data. The word ‘upload’ loses its distinctive meaning, and the field loses its ability to communicate what it is actually trying to achieve and how far away a true fly upload still is.

Still loading

When the vocabulary of breakthroughs is spent on incremental demos, the actual breakthroughs are cheapened when they arrive. Funders and the public lose the ability to distinguish genuine milestones from slick demos, and investment flows towards groups making the boldest claims rather than those doing the most foundational work. Worse, for a field that is struggling to graduate from science fiction to serious research, premature claims risk triggering the cycle of hype and disillusionment that has set back other ambitious programs before.

To be fair, we’re not unsympathetic to why Eon used the language they did. Their careful blog post on ‘How the Eon Team Produced a Virtual Embodied Fly’ would likely have been read by only a few hundred neuroscientists, while “We’ve uploaded a fruit fly” reached millions. Startup survival requires investment, funding follows excitement, and excitement follows headlines – not careful caveats.
This bold approach may even feel obligatory when an organisation’s stated mission is “solving brain emulation as an engineering sprint, not a decades-long research program.”

But the history of science – and the gap between what Eon demonstrated and what uploading actually requires – suggests that there is likely no shortcut through the long slog ahead.

Because in all probability, before anyone can truthfully claim to have uploaded a fly, there will still need to be years more of tedious work. Countless painstaking patch-clamping experiments, carefully guiding a glass electrode into a single neuron while keeping it alive, just to learn how that one cell type, out of the fly brain’s thousands, transforms its inputs into outputs. Endless sessions of pinning flies under two-photon microscopes, collecting calcium imaging data while the animals walk or groom or navigate an odor plume, slowly building up ground-truth measurements of what real brain activity actually looks like during real behavior. Thousands of hours still to come of building computational models, testing them against that data, failing, and refining them again.

Then, and very likely only then, will there come a day when someone hits ‘run’, and a fly – disoriented in whatever way a fly can be, having been sitting in a vial a moment ago – will find itself somewhere unfamiliar. It won’t know that in the intervening time it had been anesthetised, embedded in resin, and its brain sliced into thousands of thin sections. It won’t know that those sections were painstakingly imaged, or that its neural architecture was reconstructed from those images, or that thousands of its fellow flies were studied and sacrificed to fill in what images alone couldn’t tell us. It won’t know of the billions of dollars and thousands of careers that it took to reach this point, or the millions of hours spent staring down microscopes, handling vials, and debugging code.
It will certainly never know that it was once made of proteins and cells, and is now made of silicon and mathematics.

It will just beat its wings, lift off, and search for fruit.
