Hard Light Productions Forums
Off-Topic Discussion => General Discussion => Topic started by: Flipside on February 18, 2014, 12:28:04 pm
-
http://www.bbc.co.uk/news/health-26224813
Using the impulses from one monkey's brain, scientists are able to produce movement in a separate, sedated monkey.
This is interesting in several ways, not least of which is the use of the phrase 'Master Monkey'. It's controversial work, but with considerable ramifications for people with paralysis, nerve damage, etc.
Not only that, it's also quite interesting from a psychological point of view, because it seems to show that, at least as far as the 'wiring' for major motor functions is concerned, Red is always Red, as it were. This was pretty much a certainty anyway, but it's good to have more solid confirmation of it.
-
I assume the sedated one would need to be called Blaster Monkey?
-
For lack of a better way of wording it, this brings us closer to "Sariff Industries", right?
-
Yes. Working cybernetic control systems won't need to be specifically tuned to a person's own neural patterns, making it drastically easier to get a direct interface working and replicable on a mass scale.
-
That could be really helpful in both bionics development and powered exosuit control systems. Such a "standard neural template" would go a long way towards lowering the cost of such devices. I've always hoped that I'd live to see a world where most disabilities are a thing of the past. With proper neural interfaces, it could be possible to completely replace malfunctioning (or nonfunctional) eyes, ears, limbs... As such, anything that increases our knowledge in that regard is incredibly valuable.
-
I'd love cybernetic eyes, having **** meat ones with nearsightedness and astigmatism both.
-
I'd love cybernetic eyes, having **** meat ones with nearsightedness and astigmatism both.
I bet jg18 would like a pair of those right now, too.
-
I'd love cybernetic eyes, having **** meat ones with nearsightedness and astigmatism both.
This. :yes:
I would also like a math processor. But that's just me.
-
The research in the field has always shown that any brain-machine interface requires a machine learning phase to customize the robot's response to the controlling individual, usually via linear regression models. This is necessary, because neural circuits vary way too much at the cellular/network level for any standardized system to work. Machine learning works alright for motor control, and human subjects have already been able to control a robot arm (albeit slowly and clumsily) to feed themselves, etc.
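To give a rough idea of what that learning phase looks like, here's a minimal sketch (entirely made-up data and channel counts, purely to illustrate the idea of a per-subject linear decoder) of fitting firing rates to a 2D arm velocity with ordinary least squares:
[code]
# Minimal sketch of a per-subject "training phase": fit a linear map from
# neural firing rates to 2D hand velocity, then decode new activity with it.
# All numbers here are hypothetical stand-ins.
import numpy as np

rng = np.random.default_rng(0)

n_samples, n_neurons = 500, 96          # e.g. a 96-channel electrode array
true_weights = rng.normal(size=(n_neurons, 2))

# Simulated recordings: firing rates (Hz) and the velocities they accompany
firing_rates = rng.poisson(lam=20.0, size=(n_samples, n_neurons)).astype(float)
velocities = firing_rates @ true_weights + rng.normal(scale=5.0, size=(n_samples, 2))

# "Learning phase": ordinary least-squares regression, one decoder per subject
decoder, *_ = np.linalg.lstsq(firing_rates, velocities, rcond=None)

# "Control phase": decode a new burst of activity into a velocity command
new_rates = rng.poisson(lam=20.0, size=(1, n_neurons)).astype(float)
velocity_command = new_rates @ decoder
print(velocity_command)   # 2D velocity sent to the robot arm / cursor
[/code]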
However, the other way around is far harder to do. We're a long way from direct sensory replacements such as eyes, since the retinas themselves do a crapload of image processing and multiplexing (with slightly different outputs depending on the individual) before the signal ever reaches the optic nerve. They're much more like hardware-accelerated jpeg encoders than digital camera sensors outputting bitmaps. That sort of signal is still way beyond replication at our current level of understanding. While the brain is pretty plastic in learning how to interpret new inputs, there's a limit on the modality of what's coming in, so we can't just shoehorn webcams into our faces and hope it works.
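To illustrate the "jpeg encoder" point: retinal ganglion cells do something loosely like a centre-surround comparison rather than reporting raw pixels. Here's a crude difference-of-Gaussians sketch of that kind of transform (arbitrary parameters, nowhere near what a real retina actually does):
[code]
# Crude illustration of retina-style preprocessing: a difference-of-Gaussians
# (center minus surround) filter. Real retinal circuits do far more than this;
# the sigmas and the image here are arbitrary stand-ins.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
image = rng.random((128, 128))                 # stand-in for a camera frame

center   = gaussian_filter(image, sigma=1.0)   # narrow "center" blur
surround = gaussian_filter(image, sigma=4.0)   # wide "surround" blur
ganglion_output = center - surround            # contrast/edges, not raw pixels

# Most of the uniform background cancels out; what goes down the "optic nerve"
# is closer to a contrast map than to a bitmap.
print(ganglion_output.min(), ganglion_output.max())
[/code]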
-
Just a small reminder that the technology already exists (http://www.smh.com.au/technology/sci-tech/bionic-eye-goes-live-in-world-first-by-australian-researchers-20120830-251nu.html).
-
Retinal implants are a far cry from actual prosthetic eyes, which was my point. Also, current retina implants are mostly turds, with something like a few hundred electrodes/"pixels".
-
Of course it's by no means a simple task, especially since an implant also needs to be unobtrusive. Retinal implants especially, as they're on the border of nanotechnology. A full ocular implant could probably afford to take up more space, but it would still have to be stuffed into an incredibly small volume. I believe such a small camera could be made, though, and a basic image processing system could probably fit, too. Still, there's the problem of the interface, and another of powering and cooling the thing. Real-time, high-res image processing is expensive, performance-wise, and while a modern, dedicated subsystem could probably accomplish it, its power drain and heat output are gonna be on par with a gaming-grade GPU.
-
Well, the thing with prosthetic sensory organs such as eyes is that we fall right back into that 'Does everyone see Red as Red?' question again, particularly if you include not only receiving information but interpreting it as well. For example, whilst this monkey's mental patterns may be able to control a separate organ such as an arm, I'm not certain that feedback from that arm would be as easily interpretable by the brain controlling it. I suppose more testing will give clearer results.
-
Of course it's by no means a simple task, especially since an implant also needs to be unobtrusive. Retinal implants especially, as they're on the border of nanotechnology. A full ocular implant could probably afford to take up more space, but it would still have to be stuffed into an incredibly small volume. I believe such a small camera could be made, though, and a basic image processing system could probably fit, too. Still, there's the problem of the interface, and another of powering and cooling the thing. Real-time, high-res image processing is expensive, performance-wise, and while a modern, dedicated subsystem could probably accomplish it, its power drain and heat output are gonna be on par with a gaming-grade GPU.
It's already feasible to fit all of that within the size of an eye. Also, since bandwidth wouldn't be a problem, there would be no need for image compression at all. The main processing struggle would actually be converting the camera data into neural impulses, which I can imagine is immensely complicated, but well within the capabilities of a dedicated microprocessor.
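Just to make "converting the camera data into neural impulses" concrete, the most naive possible version of that conversion would be something like mapping pixel brightness onto per-electrode pulse rates. This is a purely hypothetical sketch; the real problem is reproducing the retina's own output code, which is far more complex:
[code]
# Naive sketch: turn a camera frame into per-electrode stimulation pulse rates.
# Purely illustrative -- a real prosthesis would need to reproduce the retina's
# own (far more complex) output code, which is the actual hard problem.
import numpy as np

def frame_to_pulse_rates(frame, grid_shape=(10, 10), max_rate_hz=200.0):
    """Downsample a grayscale frame (values 0..1) to an electrode grid and map
    brightness linearly to a pulse rate per electrode."""
    h, w = frame.shape
    gh, gw = grid_shape
    # Average-pool the frame down to one value per electrode
    pooled = frame[: h - h % gh, : w - w % gw]
    pooled = pooled.reshape(gh, h // gh, gw, w // gw).mean(axis=(1, 3))
    # Brightness 0..1 -> pulse rate 0..max_rate_hz
    return pooled * max_rate_hz

frame = np.random.default_rng(0).random((120, 160))   # fake grayscale frame
print(frame_to_pulse_rates(frame).round(1))
[/code]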
-
I wasn't saying anything about compression. Quite the opposite: the image would have to be very high-res. However, the camera would have to process image data into a form usable by the brain (a considerable amount of work) and transmit it over the neurons (which we're still trying to figure out). Additionally, you'd have to power the camera and make sure it doesn't overheat while doing this. Both of those things usually require rather bulky devices which are not exactly easy to miniaturize. In fact, any sensory implant would probably run into this problem, because of the high performance required to keep up with its biological equivalent.
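For a sense of scale on that "keeping up" problem, here's a quick back-of-envelope comparison (the roughly one million optic nerve fibres is the commonly quoted ballpark figure; the camera numbers are arbitrary but typical):
[code]
# Back-of-envelope comparison: raw camera throughput vs. optic nerve channels.
# The ~1 million fibre figure is the commonly quoted ballpark for a human
# optic nerve; the camera spec is an arbitrary but typical example.
width, height, fps, bits_per_pixel = 1920, 1080, 60, 24
raw_camera_bits_per_s = width * height * fps * bits_per_pixel
print(f"Raw 1080p60 camera stream: {raw_camera_bits_per_s / 1e9:.1f} Gbit/s")

optic_nerve_fibres = 1_000_000
pixels = width * height
print(f"Camera pixels per optic nerve fibre: {pixels / optic_nerve_fibres:.1f}")
# i.e. the implant has to re-encode ~2 million pixels per frame into roughly a
# million spiking channels, in real time, inside an eyeball.
[/code]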
-
The eye has extremely good liquid cooling in the form of blood flow. The only reason you don't cook your retinas within minutes of being exposed to the sun or even daylight is that blood carries away all the heat. The reason you don't want to look directly at the sun is that the fovea has no blood vessels in it, and so has much ****tier cooling.
But yeah, interface problems and ****. Gimme dat neuropyzine.