Gram-scale StarChip components | 4 processors

Apr 12, 2016 20:37 Steven Kelly Posted on: Breakthrough Initiatives

In addition to the processors, I think you need to consider the software. These days we can make processors reliable, but software not so much. With a 20-year journey and a 4-year return message time, it's worth having the software be as smart as possible to make the most of the opportunity autonomously. And yet software is maybe the only part that can't be made redundant by sending many craft. That combination forms a significant challenge. I don't think we've met challenges like that so far: a glitch on New Horizons was acceptable, with a 4½-hour message delay and an active mission time of a week; at 4½ years' delay and an active mission time of an hour, it's a bit trickier. One way to keep complexity down and improve reliability would be Domain-Specific Modeling; it's been great to see its success on some previous NASA missions, and hopefully it can help here too.
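As an illustration of the Domain-Specific Modeling suggestion above, here is a minimal sketch, assuming a hypothetical flyby sequence and hypothetical names (Step, FLYBY_MODEL, run_model): the mission logic lives in a small declarative model that domain experts can edit, while the only imperative code is a tiny, heavily testable interpreter.

```python
# Minimal sketch (not from the original post) of how Domain-Specific Modeling
# can shrink hand-written flight code: the flyby is described as a declarative
# model, and a small generic interpreter validates and executes it.

from dataclasses import dataclass

@dataclass(frozen=True)
class Step:
    at_seconds: float        # time offset from closest approach
    instrument: str          # which payload to trigger
    action: str              # what it should do
    timeout_s: float         # give up and move on if the action hangs

# The "model": domain experts edit this table, not the control code.
FLYBY_MODEL = [
    Step(-1800.0, "camera",  "acquire_target",   60.0),
    Step( -600.0, "camera",  "image_burst",      30.0),
    Step(    0.0, "spectro", "closest_approach", 10.0),
    Step(  600.0, "radio",   "queue_downlink",  120.0),
]

def validate(model):
    """Static checks on the model, run long before launch."""
    assert all(s.timeout_s > 0 for s in model), "timeouts must be positive"
    assert [s.at_seconds for s in model] == sorted(s.at_seconds for s in model), \
        "steps must be time-ordered"

def run_model(model, dispatch):
    """Tiny generic interpreter: the only imperative code needing deep testing."""
    for step in model:
        dispatch(step)   # hardware-specific callback supplied by the flight software

if __name__ == "__main__":
    validate(FLYBY_MODEL)
    run_model(FLYBY_MODEL,
              dispatch=lambda s: print(f"t={s.at_seconds:+.0f}s {s.instrument}.{s.action}"))
```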

Apr 12, 2016 20:58 Nick Kamaris Posted on: Breakthrough Initiatives

The importance of reliable software should not be underestimated.

Apr 13, 2016 01:47 Eric Fallabel Posted on: Breakthrough Initiatives

On the bright side, software doesn't weigh much, and there is ample time available to develop a robust architecture.

Apr 13, 2016 20:54 Karl Loh Posted on: Breakthrough Initiatives

I suggest self-assembly of small probes at the destination, resulting in a more capable combined probe. Based on the situation and environment, "brain-cell" probes would differentiate themselves, as cells do, to assume different functions. I also suggest that the "OS" and interconnects be designed so that future generations of more capable probes can harness the capabilities of earlier ones.
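As a hedged sketch of the differentiation idea, assuming each probe can observe which roles its neighbors have already taken (all names here, such as ROLES and choose_role, are illustrative):

```python
# Rough sketch of the "brain-cell" differentiation idea: a probe picks the
# least-covered role it is still physically able to perform.
from collections import Counter

ROLES = ["imaging", "spectroscopy", "relay", "navigation"]

def choose_role(neighbor_roles, healthy_subsystems):
    """Pick the least-covered role this probe can still perform."""
    counts = Counter({r: 0 for r in ROLES})
    counts.update(neighbor_roles)
    candidates = [r for r in ROLES if r in healthy_subsystems]
    if not candidates:            # nothing works: fall back to being a dumb relay
        return "relay"
    return min(candidates, key=lambda r: counts[r])

# Example: two neighbors already image, our spectrometer is dead -> take "relay".
print(choose_role(["imaging", "imaging", "navigation"],
                  healthy_subsystems={"imaging", "relay", "navigation"}))
```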

Apr 14, 2016 22:23 Chris Maurer Posted on: Breakthrough Initiatives

To add to Karl's comment, you can make the probes 'redundant' from a code perspective by allowing them to communicate with one another. They would not necessarily share code, but they could share computations and the results of those computations. The brain-cell concept would fit nicely into something like a neural-network machine-learning model. As a group, the probes would need to learn collectively on their own, so they could self-correct whatever is needed on the journey; the probes themselves could form a collective neural net.
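To illustrate the share-results-not-code point, here is a minimal sketch, assuming the probes exchange a single independently computed number (e.g. an estimated closest-approach time) and keep the value the largest agreeing group reports; the function name and tolerance are invented for the example.

```python
# Hedged sketch: probes run the same estimate independently, exchange only the
# results, and each probe keeps the answer the largest agreeing group produced.

def merge_shared_results(results, tolerance=0.05):
    """Group exchanged values that agree within `tolerance`, return the
    average of the largest group (a simple majority-style vote)."""
    buckets = []
    for value in results:
        for bucket in buckets:
            if abs(bucket[0] - value) <= tolerance:
                bucket.append(value)
                break
        else:
            buckets.append([value])
    best = max(buckets, key=len)          # largest agreeing group wins
    return sum(best) / len(best)          # average of the agreeing probes

# One probe's estimate glitched and reports nonsense; the group still converges.
print(merge_shared_results([3600.12, 3600.11, 3600.13, 12.0]))
```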

Apr 17, 2016 05:51 john.hayden1@gmail.com Posted on: Breakthrough Initiatives

Moore's Law may not be the feature-size limiter. Particle implantation damage becomes more serious for smaller-geometry devices: already, at 55 nm and below (16 nm is state of the art), a single particle can disrupt 4-8 adjacent SRAM cells even in the friendly terrestrial environment, which requires interleaving and muxing of data words in the physical SRAM layout even with ECC.

So: smaller feature size requires increasing redundancy, which increases area. At some point these may balance, i.e. further feature-size reductions end up requiring a bigger total survivable chip area.
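To make the interleaving point concrete, here is a small illustrative model (not the poster's layout; the interleave factor of 8 is an assumption). Because physically adjacent cells belong to different logical words, a strike that flips several neighboring cells corrupts each word by at most one bit, which single-error-correcting ECC can then repair.

```python
# Illustrative model of physical bit interleaving in an SRAM row group.

INTERLEAVE = 8          # assumed interleave factor; real layouts vary
WORDS      = 8          # logical 8-bit words stored in one physical row group

def physical_index(word, bit):
    """Map (word, bit) to a physical cell: bits of one word sit 8 cells apart."""
    return bit * INTERLEAVE + word

def strike(cells, start, width=4):
    """Model one particle upsetting `width` adjacent physical cells."""
    for i in range(start, start + width):
        cells[i] ^= 1

# Fill memory, hit 4 adjacent cells, then count flipped bits per logical word.
cells = [0] * (WORDS * 8)
strike(cells, start=16)
errors_per_word = [sum(cells[physical_index(w, b)] for b in range(8))
                   for w in range(WORDS)]
print(errors_per_word)   # each affected word sees only a single flipped bit
```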

...

Apr 17, 2016 05:56 john.hayden1@gmail.com Posted on: Breakthrough Initiatives

... so algorithmic and architectural methods must be devised, beyond simple redundancy, so that the design's function is insensitive to the faults expected at these high energies.

Some local faults could be crippling, e.g. power-to-ground shorts, so the processors must be designed with architectural and physical partitioning as part of the safety measures.
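As a minimal sketch of "beyond simple redundancy" combined with physical partitioning, assuming three electrically isolated partitions and a simple majority voter (the partition behaviours below are invented for illustration):

```python
# Sketch: three isolated partitions run the same task; a voter masks one bad
# result, and a partition lost to a local short simply drops out of the vote.
from collections import Counter

def triplicated(task, partitions):
    """Run `task` on each partition; return the value at least two agree on, else None."""
    results = []
    for run in partitions:
        try:
            results.append(run(task))
        except Exception:            # a dead partition contributes nothing
            continue
    if not results:
        return None
    value, votes = Counter(results).most_common(1)[0]
    return value if votes >= 2 else None

def good(x):                         # healthy partition
    return x + 1

def flaky(x):                        # partition with a stuck bit in its result
    return (x + 1) ^ 0x10

def dead(x):                         # partition taken out by a local short
    raise RuntimeError("power-to-ground short")

print(triplicated(41, [good, flaky, good]))  # -> 42, single fault masked
print(triplicated(41, [good, flaky, dead]))  # -> None, no quorum: flag and retry
```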

Apr 18, 2016 19:14 Mitch Fagen Posted on: Breakthrough Initiatives

What about the effect the acceleration might have on the electronic components and connections? I've read in news stories about this proposal that the acceleration is estimated to be approximately 60,000 g. Is it possible to make microchips, a motherboard, a camera, etc., that could survive such a force?

Apr 23, 2016 23:20 michael.million@sky.com Posted on: Breakthrough Initiatives

Nano-electronics can easily survive a million g's if arranged in a series of dots. Contacts may need to be flexible, i.e. like a two-sided bridge that bends and separates in the middle under g-load but closes again when the load is gone.

Aug 01, 2016 13:43 Breakthrough Initiatives Posted on: Breakthrough Initiatives

Apr 12, 2016 20:37 Steven Kelly Posted on: Breakthrough Initiatives
"In addition to the processors, I think you need to consider the software. These days we can make processors reliable, but software not so much. With a 20-year journey and a 4-year return message time, it's worth having the software be as smart as possible to make the most of the opportunity autonomously. And yet software is maybe the only part that can't be made redundant by sending many craft. That combination forms a significant challenge. I don't think we've met challenges like that so far: a glitch on New Horizons was acceptable, with a 4½-hour message delay and an active mission time of a week; at 4½ years' delay and an active mission time of an hour, it's a bit trickier. One way to keep complexity down and improve reliability would be Domain-Specific Modeling; it's been great to see its success on some previous NASA missions, and hopefully it can help here too."


Answer:
This is an excellent point. The nanocraft will need to be highly autonomous in order to cope with the extreme communication latency. Thankfully, the tasks they need to perform during the crucial one-hour flyby are not terribly complicated and are well within the capabilities of present-day autonomous systems. As you have pointed out, software testing will be crucial.

– Zac Manchester, Breakthrough Initiatives
