Hacker News | past | comments | ask | show | jobs | submit | wumms's comments

Writing system [0]: "In a talk at the Oxford University Society of Bibliophiles on 11 May 2009 [1], Serafini stated that there is no meaning behind the Codex's script, which is asemic; that his experience in writing it was similar to automatic writing [2]; and that what he wanted his alphabet to convey was the sensation children feel with books they cannot yet understand, although they see that the writing makes sense for adults. However, the book's page-numbering system was decoded by Allan C. Wechsler and Bulgarian linguist Ivan Derzhanski, as being a variation of base 21."

[0] https://en.wikipedia.org/wiki/Codex_Seraphinianus#Writing_sy...

[1] I could not find a source for that talk

[2] Automatic writing, also called psychography, is a claimed psychic ability allowing a person to produce written words without consciously writing. https://en.wikipedia.org/wiki/Automatic_writing


Wow, weird to see it's a documented phenomenon. I thought it was just me. At times I can talk, write, or make sounds in a manner that is neither intentional nor voluntary. It's nothing psychic, though; it's just a weird crossover of my ADHD and dissociation.


> Humans generating random numbers

> His primary school principal described him as ‘a happy little fellow who has a clear understanding of the fact that he is different’.

Also, files that are added to .gitignore after they’ve already been committed will still appear as modified. To stop tracking them, you need to remove them from the index (staging area):

    git rm --cached <file>
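For instance, the full sequence might look like this (the file name is hypothetical):

```shell
# config.log was committed before it was added to .gitignore.
echo "config.log" >> .gitignore

# Remove it from the index only; the working-tree copy stays on disk.
git rm --cached config.log

# Commit both the .gitignore change and the removal.
git commit -m "Stop tracking config.log"
```

After the commit, git ignores future changes to the file, but note that the old contents remain in history.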

They mention it in the article:

> Microsoft began to build on their work in 2017. Although Kazansky’s approach maximizes durability and the density of data, in the latest work, Microsoft has gone for practicality. They explore a method that enables data to be written faster and decoded more reliably than did Project Silica’s previous iterations, says Black, and it uses cheaper borosilicate glass, rather than harder-to-make fused silica.

Following your link, I found a prototype of the media storage system (2023) with just 2828 views: https://www.youtube.com/watch?v=xnK-uB4OsgU


Pentagon might ask contractors to certify they don't use Anthropic's Claude (wsj.com)

13 points by fortran77 20 hours ago | 6 comments

https://news.ycombinator.com/item?id=47057294


Current write speeds (no read speed given):

    Blu-ray (1×)            ~36   Mbit/s
    MS-Glass (single beam)  ~25.6 Mbit/s
    MS-Glass (multi-beam)   ~65.9 Mbit/s
That's ~7-18 days per 120mm x 120mm medium (4.8TB). Glass prices stable for now. Also, the authors make no statement about horizontal vs. vertical storage.
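The back-of-the-envelope arithmetic behind that estimate (assuming decimal units, i.e. 4.8 TB = 4.8 × 10^12 bytes, and a sustained write rate):

```python
BITS_PER_TB = 8e12  # decimal terabyte, in bits

def write_days(capacity_tb, mbit_per_s):
    """Days needed to fill the medium at a sustained write rate."""
    seconds = capacity_tb * BITS_PER_TB / (mbit_per_s * 1e6)
    return seconds / 86400

for label, rate in [("single beam", 25.6), ("multi-beam", 65.9)]:
    print(f"{label}: {write_days(4.8, rate):.1f} days")
# single beam: 17.4 days
# multi-beam: 6.7 days
```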

> No read speed given

Write-only medium!


The reading is done with a high-resolution video camera, and the image is processed to extract the data.

This can be easily done many times faster than the writing, which is why the article is focused on the progress that Microsoft has achieved in increasing the writing speed, in comparison with their prototypes from a few years ago. It is also easy to make separate readers that are much cheaper and smaller than the writers.

The most important limitation of this device is the currently very high cost of the lasers used for writing. If they were cheaper, the writing speed could be increased simply by adding more lasers.

Microsoft argues that if this kind of short-pulse laser were mass-produced, it could become much cheaper, as has happened with the many lasers now used everywhere in optical-fiber communication and in optical discs.

For now, this is a chicken-and-egg problem: this kind of optical storage cannot be turned into a commercial product because the lasers are too expensive, and the lasers are too expensive because there is no high-volume market for them.

Even the current level of performance would be enough for me. If I could afford such a device, I would buy it instantly, to stop worrying about periodically buying new HDDs and migrating my data off old ones, and about periodically buying new tape drives and migrating my data off tape formats that become obsolete.


At least it is safe for 10k years! And from everybody ever, basically.

Thanks for digging this up. Every "scientists create new storage medium" story is always a disappointment once you get to the write speeds. This seems decent? At least in "raw" numbers there's nothing obviously making it useless. Let's hope they have a path to quick commercialisation and make it available. Whether there's any DC adoption will be the real test, I think.

The first CDs took an hour and a half to write with a laser. Once engineers take over the tech, it might get faster.

If they get the read speed up to a couple of GBit/s (~100x current max write speed), 4.8TB might be a good fit for 32k movies.

Of course there are people out there watching 32k movies.

Was 4k not enough?

Am I the only one who's still content with 720p?


The display has some bearing on this. Generally, 1080p is good enough, but some cinematography benefits from higher resolution and, as a result, requires a better display.

>This seems decent?

Definitely. If it actually achieves those speeds it's perfectly reasonable for long-term/cold storage.


Depends somewhat on the read speed, too. Extreme example: if that is one bit per year, it doesn’t matter that you can write stuff on it.

I imagine if you can use lasers to etch at that speed, you can use them to read at similar speeds as well.

Write speed is probably the least important metric for people considering something like this. Once storage and longevity are taken care of, improving write speed is nice to have, but not the important part.

> Imagine some future event even more powerful

https://en.wikipedia.org/wiki/Miyake_event


Reminds me of: Man’s Search for Meaning (1946) https://en.wikipedia.org/wiki/Man%27s_Search_for_Meaning


Was rereading that last year. Recommend it to anyone who hasn't.


Sorry, nitpick: 720 kB DD, 1.44 MB HD


It's funny that we've always called them "1.44 MB" disks when they actually hold 1440 KiB, which is 1.41 MiB or 1.47 MB.

"1.44" is a horrible mix of binary kilo and decimal mega which makes no sense.


With the GNU Units program, I have this defined in my ~/.units: "floppyMB 1000 KiB"

Is it useful? Perhaps not, but you can use it to translate "1.2 floppyMB", "1.44 floppyMB" into other units.


Probably because they have 2880 sectors.


++1

:))


> Today we're launching Twin publicly, after a 1-month beta where users deployed more than 100,000 fully autonomous agents. We're also announcing a $10M seed round led by LocalGlobe.

So... 20%?

