
One of the things I've realized is that using names for locating what's needed (I assume we're talking about the same idea) is part of the problem. At small scales it's fine; as systems get bigger, it becomes a problem. The internet went through this. When it started as the Arpanet, there was (if I remember correctly) one person who kept the directory of names for every system on the network. The network started small, so this could work. As it grew into the thousands of nodes, this became less manageable, partly because duplicate requests started coming in for the same name from different nodes--naming conflicts--which is why DNS was created, and why ICANN was ultimately created: to settle who got to use which names.

I doubt something like that would scale properly for code, though many organizations have tried it by having software architects assign names to entities within programs. The problem comes when companies/organizations try to link their systems together to work more or less cooperatively. I heard despondent software engineers talk about this 15 years ago, saying, "This is our generation's Vietnam." (They didn't lack for the ability to exaggerate, but the point was they could not "win" with this strategy.) They were hoping to build the semantic web, but different orgs couldn't agree on what terms meant. They'd use the same terms, but mean different things by them, and they couldn't make naming work across domains ("domains" in more than one dimension).

So we need something different for locating things. Names are fine for humans. We could even have names in code, but they wouldn't be used for computers to find things, just for us. If we need to disambiguate, we can find other features to help, but computing needs something, I think, that identifies things by semantic signifiers, so that even though we use the same names to talk about them, computers can disambiguate what they actually need by function. It wouldn't get rid of all redundancy, because humans being humans, economics and competition are going to promote some of it, but it would help create a lot more cooperation between systems.


The thing about DNS and naming is that there were a lot of ideas flying around, some of them in the big standards committees. X.400 and X.500 were the OSI standards for messaging and directory services that were going to handle finding entities using specific attributes rather than with direct names or even straight hierarchical naming (like DNS finally used). It's interesting to read all the old stuff -- I had to sift through much of it a few years back when I wrote my dissertation on the early history of DNS (a cure for insomnia).

I wonder with the Internet now if anything effectively different is even possible, considering that it's no longer a small network but everywhere like the air we breathe.


> I wonder with the Internet now if anything effectively different is even possible, considering that it's no longer a small network but everywhere like the air we breathe.

You could slowly bootstrap a new system on the existing one, but you'd need a fleshed-out design first :) Everything is replaceable, IMO, even well-established conventions and standards, if something compelling comes along.

The CCN (content-centric networking) ideas are related to naming as well. Maybe the ideas could be extended to handle 'objects' rather than just 'content'.
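For a rough feel of the CCN style, here's a toy Python sketch (the names and the store are invented for illustration): you express interest in content by name, and any node holding it can answer, without ever naming a host.

    # Toy sketch of CCN-style retrieval: content is requested by
    # hierarchical name, not by the address of a host (names here
    # are invented for illustration).
    content_store = {
        "/parc/papers/dns-history/v1": b"...pdf bytes...",
        "/parc/talks/starship/v2": b"...video bytes...",
    }

    def express_interest(name):
        # Any node whose store holds the name can satisfy the
        # interest; the requester never names a machine.
        return content_store.get(name)

    data = express_interest("/parc/papers/dns-history/v1")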


Hardware architecture is the horizon of my knowledge. But one thing I've always wondered is this: why not just have memory addresses inside a computer map to local IPv6 addresses, then have some other "chip" that can pick out the non-local IP addresses, which would, in a perfect object world, point to places in memory on another remote machine?

Obviously there would need to be some kind of virtualization of the memory but hopefully you get the idea. Not exactly related to naming but whatever.
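In software terms, something like this Python sketch is the shape of the idea (the address space and the stub are invented for illustration; a real design would need the "chip" to do the remote side):

    import ipaddress

    # Hypothetical sketch: a flat object space keyed by IPv6
    # addresses. Local addresses resolve to local memory; global
    # addresses would go out over the network (stubbed here).
    local_memory = {}  # ipaddress.IPv6Address -> bytearray

    def read(addr_str, offset, length):
        addr = ipaddress.IPv6Address(addr_str)
        if addr.is_loopback or addr.is_link_local:
            return bytes(local_memory[addr][offset:offset + length])
        # A real system would need the "chip": some fabric that
        # forwards this request to the machine owning the address.
        raise NotImplementedError("remote fetch not sketched")

    local_memory[ipaddress.IPv6Address("::1")] = bytearray(b"hello world")
    print(read("::1", 0, 5))  # b'hello'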


Interesting - is the broader idea here that there is a virtual machine that spans multiple physical machines? Instead of virtual 'memory access', why not model this as a virtual 'software internet'?


I don't even think you need a VM, really. Just have this particular computer equipped with some soft-core that handles IP from the outside. The memory mapping, since it's just IPv6, can determine whether you are dealing with information from the outside world (non-localhost IPs) or your own system (local IPs). Because logically they are already different blocks in memory, they're already isolated.

With something like that you might be able to have "pure objects" floating around the internet. Of course your computer's interpretation of a network object is something it has to realize inside of itself (kind of like the way you interpret the words coming from someone else's mouth in your own head, realizing them internally), but you will always be able to tell that "this object inside my system came from elsewhere" for its whole lifecycle.

Maybe you could even have another soft-core (FPGA-like) that deals with brokering these remote objects, so you can communicate changes to an incoming object that you want to send a message to. This is much more like communication between people, I think.
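A rough Python sketch of the broker idea (everything here is hypothetical): a local proxy realizes the remote object internally but keeps its provenance for its whole lifecycle, and forwards messages back to the origin.

    # Hypothetical broker sketch: a local proxy "realizes" a remote
    # object internally but remembers where it came from, so any
    # message we send to it can be forwarded to its origin.
    class RemoteObjectProxy:
        def __init__(self, origin_ip, snapshot):
            self.origin_ip = origin_ip   # provenance, kept for life
            self.state = dict(snapshot)  # our internal realization

        def send(self, message):
            # A real broker would serialize and transmit this over
            # the network; here we just record the intent.
            print(f"forwarding {message!r} to {self.origin_ip}")

    obj = RemoteObjectProxy("2001:db8::42", {"color": "red"})
    obj.send({"op": "set", "key": "color", "value": "blue"})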


> I don't even think you need a VM, really.

I mean a VM as in the idea that you are programming an abstract thing, not a physical thing. Not a VM as in a running program. You could emulate the memory mapper in software first - hardware would be an optimization.

The important point is 'memory mapper' sounds like the semantics would be `write(object_ip, at_this_offset, these_bytes)`, but what you really want IMO is `send(object_ip, this_message)`. That is, the memory is private and the pure message is constructed outside the object.

You still need the mapping system to map the object's unique virtual id to a physical machine and a physical object. So having one IP for each of these objects could be one way.
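To make the contrast concrete, here's a toy Python sketch (the registry, addresses, and Counter are all invented for illustration). The memory-style interface has nothing to reach into, because the object keeps its state private and only answers messages:

    # Toy sketch of the two interfaces.
    registry = {}  # object_ip -> (machine, local object)

    class Counter:
        def __init__(self):
            self._n = 0  # private state, reachable only via messages

        def receive(self, message):
            if message == "inc":
                self._n += 1
            return self._n

    def write(object_ip, offset, data):
        # Memory-style: the caller reaches into the object's bytes.
        machine, obj = registry[object_ip]
        obj.memory[offset:offset + len(data)] = data
        # Counter exposes no memory, so this would raise -- which is
        # exactly the point: the memory stays private.

    def send(object_ip, message):
        # Message-style: the pure message is constructed outside the
        # object, and the object interprets it itself.
        machine, obj = registry[object_ip]
        return obj.receive(message)

    registry["2001:db8::7"] = ("machine-a", Counter())
    print(send("2001:db8::7", "inc"))  # 1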

Alan Kay mentioned David Reed's 1978 thesis (http://publications.csail.mit.edu/lcs/pubs/pdf/MIT-LCS-TR-20...) which develops these ideas (still reading). In fact, a lot of 'recently' popular ideas seem to be related to the stuff in that thesis (e.g. 'pseudo-time')


> One of the things I've realized is that using names for locating what's needed (I assume we're talking about the same idea) is part of the problem.

I don't think naming itself is a problem if you have a fully decentralized system. E.g., each agent (org or person) can manage their namespace any way they choose in a single global virtual namespace. I'm imagining something like ipfs/keybasefs/upspin, but for objects, not files, and with some immutability and availability guarantees.

But yes, there should probably be other ways to find these things, using some kind of semantic lookup/negotiation.
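A rough Python sketch of what I'm imagining (all names invented): immutable objects stored by content hash, with each agent binding its own names onto them in one global virtual namespace.

    import hashlib

    # Sketch: immutable objects are stored by content hash, and each
    # agent manages its own names in a single global virtual
    # namespace ("agent/path" -> hash). All names are invented.
    objects = {}     # content hash -> bytes (immutable)
    namespaces = {}  # "agent/path" -> content hash (mutable, per agent)

    def put(data):
        h = hashlib.sha256(data).hexdigest()
        objects[h] = data  # same data -> same id, always
        return h

    def bind(agent, path, obj_hash):
        namespaces[f"{agent}/{path}"] = obj_hash

    def resolve(name):
        return objects[namespaces[name]]

    h = put(b"an immutable object")
    bind("alice@example.org", "widgets/v1", h)
    print(resolve("alice@example.org/widgets/v1"))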


What he means is that names are a local convention, and scaling soon obliterates the conventions. Then you need to go to descriptions that use a much smaller set of agreed-on things (and you can use the "ambassador" idea from the 70s as well).


A few years ago when I was doing some historical research about DNS, I came across quite a few interesting papers that all discussed "agents" in a way that seemed based on some shared knowledge/assumptions people had at the time. In particular, these would be agents for locating things in the "future internetwork". There's a paper by Postel and Mockapetris that comes to mind. Is this an example of "ambassadors"?


Ah I see, we're talking about interoperability, not just naming.

This is related to my original interest in language structure words and ontologies. The idea there is that the set of 'relationships' between things is small and universal (X 'contains' Y, A 'is an abstraction of' B) and perhaps can be used to discover and 'hook up' two object worlds that are from different domains.
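A toy Python sketch of that (the worlds and terms are invented): two domains use different names, but a small shared relation vocabulary lets a matcher notice they play the same role.

    # Sketch: two "object worlds" described with a small shared
    # relation vocabulary. Domain terms differ ("Folder" vs
    # "Directory") but the relationship structure can still line up.
    RELATIONS = {"contains", "is_an_abstraction_of"}

    world_a = {("Folder", "contains", "Document"),
               ("Folder", "is_an_abstraction_of", "Container")}
    world_b = {("Directory", "contains", "File"),
               ("Directory", "is_an_abstraction_of", "Container")}

    def signature(world, thing):
        # What a thing's relationships look like, names aside.
        return {(rel, obj) for (subj, rel, obj) in world
                if subj == thing and rel in RELATIONS}

    # "Folder" and "Directory" share the abstraction edge, which a
    # matcher could use as a hint that they play the same role.
    overlap = signature(world_a, "Folder") & signature(world_b, "Directory")
    print(overlap)  # {('is_an_abstraction_of', 'Container')}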


Yes, this got very clear in a hurry even in the ARPAnet days, and later at Parc. (This is part of the Licklider "communicating with aliens" problem.)

Note that you could do a little of this in Linda, and quite a bit more in a "2nd order Linda".
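For anyone who hasn't seen Linda, the coordination style is roughly this minimal Python sketch (heavily simplified -- real Linda's rd blocks until a match, and this one doesn't). Tuples are found by describing their contents, not by naming a receiver:

    # Minimal Linda-flavored sketch: processes coordinate through a
    # shared tuple space, finding tuples by matching on content
    # (None = wildcard) rather than by any receiver's name/address.
    tuple_space = []

    def out(t):
        tuple_space.append(t)

    def rd(pattern):
        # Return the first tuple matching the pattern; non-blocking
        # here for simplicity.
        for t in tuple_space:
            if len(t) == len(pattern) and all(
                    p is None or p == v for p, v in zip(pattern, t)):
                return t
        return None

    out(("temperature", "room-42", 21.5))
    print(rd(("temperature", None, None)))  # found by description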

I've also explained the idea of "processes as 'ambassadors'" in various talks (including a recent one to the "Starship Congress").



