Flyby | Pointing camera at target planet

During an encounter with an exoplanet, the nanocraft’s camera would need to rotate in order to image the target. If the planet were imaged with a focal-plane array, the planet would need to be held still within the image plane during the integration time of the exposure.

If an object-to-nanocraft distance of 1 AU and a velocity of 0.2c are assumed, an angular rate of (6×10⁷ m/s) / (1.5×10¹¹ m) ≈ 4×10⁻⁴ rad/s ≈ 80 arcsec/s is needed. The nanocraft would traverse a distance of 1 AU in about 2500 seconds. To stabilize the image, the camera would need to “slew” at this rate.
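
As a sanity check on these numbers, here is a minimal back-of-the-envelope calculation in Python, using only the 0.2c and 1 AU figures quoted above:

    # Slew-rate sanity check for the flyby geometry quoted above.
    v = 0.2 * 3.0e8            # nanocraft velocity, m/s (0.2c ~ 6e7 m/s)
    d = 1.5e11                 # object-to-nanocraft distance, m (~1 AU)

    omega = v / d              # angular rate at closest approach, rad/s
    print(omega * 206265)      # rad -> arcsec: ~80 arcsec/s
    print(d / v)               # time to traverse 1 AU: ~2500 s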

The slewing would be accomplished either by using the photon thrusters or via motion in the focal plane. For example, a 4-meter-scale camera array at a distance of 1 AU would provide an image resolution of order 40 km, assuming visible-spectrum imaging. The resolution could be improved by closer approaches, at distances of less than 1 AU. Extremely efficient image compression and smart feature-detection algorithms would be crucial.
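
For a rough idea of where a figure of that order comes from, a diffraction-limited estimate can be sketched as below. The 550 nm wavelength and the Rayleigh factor of 1.22 are illustrative assumptions, not Starshot design values; they give the same tens-of-kilometres scale as the text:

    # Diffraction-limited ground resolution for a 4 m aperture at 1 AU.
    wavelength = 550e-9        # m, mid-visible (assumed)
    aperture = 4.0             # m, camera-array scale quoted above
    distance = 1.5e11          # m, ~1 AU

    theta = 1.22 * wavelength / aperture   # Rayleigh criterion, rad
    print(theta * distance / 1e3)          # ~25 km, i.e. tens of km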

Another challenge is locating the planet within the Alpha Centauri system. This would be achieved through the use of ‘star trackers’ based on the star’s ephemeris (its orbital position at specific times) and the planet’s position relative to the star. The star would be very bright and the planet reasonably bright. Dead-reckoning navigation, updated by measurements of the locations of the star and the planet, is being considered. Image-recognition software would be required to decide which target had the highest value; it currently seems the most promising form of rudimentary artificial intelligence for this mission.
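
A minimal sketch of what dead reckoning updated by optical fixes could look like is given below. The function names, the three-vector state, and the simple blend gain are illustrative assumptions, not a description of any actual Starshot navigation design:

    def propagate(position, velocity, dt):
        # Advance the dead-reckoned position estimate by dt seconds.
        return [p + v * dt for p, v in zip(position, velocity)]

    def update_with_fix(position, fix, gain=0.5):
        # Blend in an optical fix derived from star/planet bearings.
        # gain=1.0 trusts the fix entirely; gain=0.0 ignores it.
        return [p + gain * (f - p) for p, f in zip(position, fix)]

    # Example: coast for 10 s at 0.2c along x, then apply a fix.
    pos = propagate([0.0, 0.0, 0.0], [6.0e7, 0.0, 0.0], 10.0)
    pos = update_with_fix(pos, [6.05e8, 0.0, 0.0])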

Sep 29, 2020 00:22 Andres Rodriguez Posted on: Breakthrough Initiatives

Maybe this is a silly idea, but couldn't you cover the whole nanocraft's surface with lower-resolution cameras? Our ability to reconstruct images is getting better and better with the use of manifold learning, and perhaps the cameras could identify different levels of depth through different parts of the spectrum. That way you wouldn't have to worry about pointing at the planet; you would instead reconstruct the relevant information with respect to the position of the nanocraft, and since you are recovering different level curves of the 3D image, you might be able to translate that into a geometrical picture of what you want to see.

Mar 30, 2021 21:54 Breakthrough Initiatives Posted on: Breakthrough Initiatives

Thank you for the ideas! It is probably not possible to fill the surface of the light sail with cameras because they would add too much mass and might not survive launch. However, it could be possible to have a sparse array of imaging cameras integrated into the light sail which could be pointed in different directions to reduce the requirements on pointing during a flyby. Given the difficulty in sending data back to the Earth, it would be ideal to have as much on-board processing as possible. The details of this have yet to be worked out.

Prof. Phil Mauskopf, ASU
Starshot Communications Lead
