Hacker Newsnew | past | comments | ask | show | jobs | submit | nateb2022's commentslogin

I find it more amusing that the benchmarks claim 530 GB/s throughput on an M1 Pro which has a 200GB/s memory bandwidth. The 275 GB/s figure for chained transforms has the same problem.

I suspect the benchmarks, if not most of this project, was completely vibecoded. There are a number of code smells, including links to deleted files, such as https://github.com/jasnell/new-streams/blob/ddc8f8d8dda31b4b... an inexistent REFACTOR-TODO.md

The presence of COMPLETENESS-ANALYSIS.md (https://github.com/jasnell/new-streams/blob/main/COMPLETENES...) isn't reassuring either, as it suggests the "author" of this proposal doesn't sufficiently understand the completeness of his own "work."


Regarding the benchmarks, "Async iteration (8KB × 1000): ~530 GB/s vs ~35 GB/s": how do you achieve 530 GB/s throughput on an M1 Pro which has a 200GB/s memory bandwidth? The "~275 GB/s" figure for chained transforms has the same problem.

I suspect the benchmarks, if not most of this project, suffer from poor quality control on vibecoded implementations.


For single-use FPV munitions, a $200 LiDAR probably wouldn't be very useful. For one, that's about half the price of the drone itself, and second, for self-destructing munitions, it's not really necessary to have anything more than video footage to direct it.

Low cost, sub $200 automotive grade LIDAR sensors are already available.

Cepton Technologies offers Nova [0], Nova-Ultra [1] sensors both at a sub-$100 price point [2]. These feature a 120°(H) x 90°(V) FOV at 50m, with 2.7M points per second sampling.

Velodyne introduced Velabit in 2021, for $100. Boasting 100m range and a 60-degree horizontal FoV x 10-degree vertical FoV.

The article claims that:

> What distinguishes current claims is the explicit focus on sub-$200 pricing tied to production volume rather than future prototypes or limited pilot runs.

which is simply not true. Cepton (currently offering) and Velodyne (acquired by Ouster in 2023) have done this for years.

  [0]: https://www.cepton.com/products/nova
  [1]: https://www.cepton.com/products/nova-ultra
  [2]: https://www.cepton.com/announcements/ceptons-nova-lidar-named-as-ces-2022-innovation-awards-honoree
  [3]: https://lidarmag.com/2020/01/07/velodyne-lidar-introduces-velabit/

99% of LiDAR production is just 4 Chinese companies. Yes low-range systems are already at the $150-300 range, but MicroVision is promising to produce this in the Washington.

Basically they're saying "we can catch up to China by 2028/2029" ||so please subsidize us||


>Cepton Technologies offers Nova [0], Nova-Ultra [1] sensors both at a sub-$100 price point

Where? How? I'm only seeing the Nova on ebay for between $4000 and $5000.


Cepton primarily operates B2B, as B2C demand for specialized LIDAR like this is pretty low. Anything you find on eBay is either a leftover dev kit or salvage. This is pretty much the case for MicroVision, Ouster etc.

I'm sure I'm not the only one hesitant to provide a 3rd party virtually MITM access to both my LLM usage + API keys. If this were capable of running locally, or even just an API for compressing non-sensitive parts of a prompt, I think it would be much easier to adopt.

Hi! You only need our API for the compression part — API keys and LLM usage are entirely managed by your own application. We don't have access to your SaaS, and we don't even know its name. We simply receive the text through our API, compress it, and return the response to your app. Your LLM — whether local, OpenAI, Claude, or any other — then processes it using your own API keys. Your data stays safe with you. And we NEVER ask for your LLM API keys. Let me know if you have any question :)

Wouldn't the example code:

  from openai import OpenAI

  client = OpenAI(
      base_url="https://agentready.cloud/v1",     # ← only change
      api_key="ak_...",                           # AgentReady key
      default_headers={
          "X-Upstream-API-Key": "sk-..."          # your OpenAI key
      }
  )

  # Every call is now compressed automatically
  response = client.chat.completions.create(
      model="gpt-4o",
      messages=[{"role": "user", "content": your_long_prompt}]
  )
provide you our OpenAI key (via the X-Upstream-API-Key header)?

You’re absolutely right, and that’s a fair catch thank you so much. The example code contradicts what I said.

The cleaner architecture — and what we should have shown — is a two-step approach where our API only handles compression, and your key never leaves your environment:

# Step 1: call AgentReady only to compress import requests

compressed = requests.post("https://agentready.cloud/v1/compress", headers={"Authorization": "ak_..."}, json={"messages": [{"role": "user", "content": your_long_prompt}]} ).json()

# Step 2: call OpenAI directly with YOUR key — we never see it from openai import OpenAI client = OpenAI(api_key="sk-...") response = client.chat.completions.create( model="gpt-4o", messages=compressed["messages"] )

This way AgentReady only touches the text for compression — never your LLM API key. We’ll update the docs and example code accordingly ASAP. Thanks for pushing on this.


That endpoint https://agentready.cloud/v1/compress endpoint doesn't exist, I get a 404. Your entire response is just hallucinated AI text at this point.

I apologize for the confusion. The /v1/compress endpoint hasn’t been deployed yet. We’re pushing it to production asap. Following your suggestion, we’re also moving the compression step closer to the client side to minimize exposure of sensitive data. We’ll update the docs accordingly. Thanks for the sharp eyes :)

Google is dealing with a wave of abuse over its Antigravity IDE, with 'account switching' tools designed to use a ton (20+) of free or pro accounts, giving the user essentially unlimited usage. I'm guessing they've deployed some rather aggressive countermeasures to stop this, including banning clients that seem to be accessing "private" APIs outside of a Google product.


Personally I find the unavailability of more widespread fast chargers the main issue, especially on roadtrips.



[dupe] https://news.ycombinator.com/item?id=46681153 (21 hr ago, 92 pt, 58 comments)


44 refers to Fedora version 44


Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: