
I'm aiming for software to digest the output of a 300fps tethered camera on a drone. Basically an Apertus AXIOM Beta with a fancier FPGA, so that all 64 lanes of the sensor can be used instead of capping out at 150fps, plus the ability to attach a 40G QSFP+ transceiver for data offloading.
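A back-of-the-envelope link budget (a sketch with assumed numbers: a CMV12000-class 4K sensor like the AXIOM's at 10 bit/pixel; none of these figures are from the thread):

    # Does 300 fps raw fit through a 40G QSFP+ link?
    # Assumed sensor: CMV12000-class, 4096x3072, 10 bit/pixel.
    width, height = 4096, 3072
    bits_per_pixel = 10          # 12 bit would not fit, see below
    fps = 300

    raw_gbps = width * height * bits_per_pixel * fps / 1e9
    print(f"raw sensor data: {raw_gbps:.1f} Gbit/s")  # ~37.7 Gbit/s
    # 40GbE payload is ~40 Gbit/s, so 10-bit raw just fits;
    # 12 bit/pixel (~45.3 Gbit/s) would need compression.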

Existing software produces worthwhile results from hand-picked pictures, but I want fully automatic behavior.

Near-term I want a construction worker to clean his hands, open the case with the hardware, show all the insides of the building to the camera, then swap the storage and get the data physically back to base for processing.

Also, video capture gets you continuity of motion. Unordered picture collections are horribly expensive to compute with.
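To make that cost difference concrete, a toy comparison (frame count and window size are my assumptions): video lets you match each frame against a small temporal neighborhood, while an unordered collection needs candidate pairs across the whole set.

    # Pair counts for feature matching: video vs. unordered set.
    n_frames = 10_000
    window = 5  # assumed: match each frame against its 5 successors

    sequential_pairs = n_frames * window               # O(n)
    exhaustive_pairs = n_frames * (n_frames - 1) // 2  # O(n^2)
    print(sequential_pairs)  # 50_000
    print(exhaustive_pairs)  # 49_995_000, ~1000x as many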

I also prefer redundancy to holes in my textures or models ;)

My drone swirls around the buildings, btw. I want VR for walking, not Google Earth.



OK, so it's mostly a hardware issue then; a camera in burst mode continuously shooting at a lower FPS should theoretically work just as well.

But indeed, if that works with video, that might be easier to use.

You're talking about IMU data; do you do geo-referencing too? It might be difficult to get a GNSS fix if your target is inside buildings.


The point is that mechanical shutters in useful DSLRs have massive rolling shutter artifacts. You can get video cameras with shutter skew in the low microseconds or even sub-microsecond.

Remember, the exposure has to be short due to motion blur.
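A rough worked bound (illustrative numbers, my assumptions): to keep blur under one pixel, the exposure has to be shorter than the time the drone takes to move one pixel footprint.

    # Max exposure that keeps motion blur below ~1 pixel.
    # Illustrative numbers, not from the thread:
    speed_m_s = 5.0   # drone speed relative to the facade
    gsd_m = 0.002     # 2 mm pixel footprint at close range

    t_max = gsd_m / speed_m_s
    print(f"max exposure: {t_max * 1e6:.0f} us")  # 400 us
    # At 300 fps the frame period is ~3333 us, so the shutter is
    # open for only ~1/8 of each frame at this bound.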

Concepts for resonant rotating camera heads were drawn up: a torsion-spring oscillator with electronically controlled clutches that advances the camera in a few milliseconds. Apart from the ability to capture ~1 Gpixel of effective 360° data in 2 seconds (there is overlap because the sphere is covered with rectangles, and the horizontal number of images per revolution (the oscillation frequency) is not changed to reduce overlap at nadir and zenith), the benefits were considered insufficient to warrant more time on it before I actually _have_ the camera. And I'll probably have an FPGA that is too bulky to oscillate with the head, so I'd need fatigue-resistant cabling for the 64 + 2 clock high-speed LVDS lanes between them.
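For scale, a toy coverage count for such a head (all numbers are assumed, just to illustrate why a fixed image count per revolution wastes pixels near the poles):

    # Toy coverage count for an oscillating head; all numbers assumed.
    h_steps = 16           # images per revolution, kept constant
    v_steps = 8            # elevation rings from nadir to zenith
    mpix_per_image = 12.6  # e.g. 4096x3072

    captured = h_steps * v_steps * mpix_per_image
    print(f"{captured:.0f} Mpix captured")  # ~1613 Mpix
    # A constant h_steps means rings near nadir/zenith overlap
    # heavily, so effective unique coverage ends up well below
    # what was captured, e.g. ~1 Gpix out of ~1.6 Gpix here.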

You can get e.g. the Blackmagic Pocket Cinema Camera 4K, which writes 30fps DCI 4K DNG with an electronic global shutter to a USB-C SSD.

I have no plans to rely on magnetic field sensors or GNSS. I plan to fuse bundle adjustment with constraints from the raw sensor data. Offset and drift correction for both will be done there, and thus I don't get the typical drift-off issues from accumulating a sensor offset.
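A minimal sketch of what such a fused objective could look like (my formulation and naming, not the author's code): reprojection residuals plus raw-gyro terms, with the bias as an extra unknown, so offset and drift become estimated parameters rather than accumulated error.

    # Sketch of a fused cost; my formulation, not the author's code.
    import numpy as np

    def reprojection_residual(K, R, t, point_3d, obs_2d):
        # Project a 3D point and compare with the 2D observation.
        p_cam = R @ point_3d + t
        p_img = (K @ p_cam)[:2] / p_cam[2]
        return p_img - obs_2d

    def gyro_residual(gyro_meas, gyro_bias, rot_delta_est, dt):
        # Bias-corrected raw rate, integrated over dt (small-angle
        # approximation), against the rotation change the bundle
        # adjustment currently believes in.
        return (gyro_meas - gyro_bias) * dt - rot_delta_est

    # The optimizer stacks both residual types and solves for poses,
    # points, and the bias jointly, so a constant sensor offset is
    # estimated instead of accumulating into drift.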

I _do_ plan to allow fixing feature markers in space, to handle geo-referencing and potential un-curling at the same time. (Depending on lens distortion, the scanned area has a variously strong tendency to reconstruct as either a small hollow earth or a small spherical earth; in both cases the average surface curvature radius is at most a few km, often even less.)
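Those fixed markers could enter the same optimization as heavily weighted residuals (again a sketch with hypothetical names):

    # Ground-control residual; hypothetical names, a sketch only.
    def marker_residual(point_est, point_surveyed, weight=100.0):
        # Heavily weighted so a handful of surveyed markers can both
        # geo-reference the model and flatten the global curl that
        # per-image terms cannot see.
        return weight * (point_est - point_surveyed)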



