Firewalling your code (lackofimagination.org)
124 points by tie-in on Aug 27, 2024 | hide | past | favorite | 78 comments


I think the general concept here is putting in place restrictions on what code can do, in service of making software more reliable and maintainable. The analogy I like to use is construction. If buildings were built like software, you'd see things like a light switch in the penthouse accidentally flushing a toilet in the basement. Bugs like that don't typically happen in construction because the laws of physics impose serious limitations on how physical objects can interact with each other. The best tools I have found to create meaningful limitations on code are a modern strong static type system with type inference, and pure functions, i.e. being able to delineate which functions have side effects and which don't. These two features combine nicely to allow you to create systems where the type system gives you fine-grained control over the types of side effects that you allow. It's really powerful and allows the enforcement of all kinds of useful code invariants.
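The "effects visible in the signature" idea can be approximated even without Haskell-style checked effects. Here is a minimal Python sketch (the `ConsoleEffect` name and API are invented for this illustration): a function can only perform an effect if it is explicitly handed a handle for it, so the signature advertises which effects the function uses.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ConsoleEffect:
    """Possession of this handle is permission to print."""
    def write(self, msg: str) -> None:
        print(msg)

def add(a: int, b: int) -> int:
    # Pure: the signature mentions no effect handle, so it cannot print.
    return a + b

def report(console: ConsoleEffect, a: int, b: int) -> None:
    # Effectful: the signature advertises exactly which effect it uses.
    console.write(f"sum = {add(a, b)}")

report(ConsoleEffect(), 2, 3)  # prints: sum = 5
```

Python won't stop a function from calling `print` directly, of course; the point is only to illustrate the discipline that a real type system could then enforce.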


> I think the general concept here is putting in place restrictions on what code can do in service of making software more reliable and maintainable. The analogy I like to use is construction.

The concept is quite old, and it's called software architecture.

All established software architecture patterns implicitly or explicitly address the problem of managing dependencies between modules. For example, the core principle of layered/onion architecture, or even Bob Martin's Clean Architecture, is managing which module can be called by which module.

In compiled languages this is a hard constraint due to linking requirements and symbol resolution, but interpreted languages also benefit from these design principles.


The goofy thing is that “software architecture” was killed by YAGNI dogma and yet the need for properly layered code hasn’t disappeared, so people are inventing tooling to enforce it.


Offtopic, but this reminds me of a plausible tech support gore story.

An office was experiencing random Internet outages and was struggling to figure out why. They traced it back to their router rebooting randomly. Tracing further, they found the outlet was experiencing big voltage drops. They then realized it was on the same circuit as a pump used to flush a porta potty for a construction crew onsite. Every time someone flushed the toilet, the router would lose power and reboot.


I agree, and in fact that's the basis of my Haskell library Bluefin[1]. If you look at it from one angle it's a Haskell "effect system" resembling other Haskell approaches for freely composing effects. If you look at it from another angle it's a capability-based security model (as also mentioned by quectophoton in this discussion[2]). There's actually quite a lot of similarity between the two areas! On the other hand it's not really a "firewall" as described by this article, because it doesn't do dynamic permission checks. Rather, permission checks are determined at compile time. (Although, I guess you could implement dynamic permission checks as one of the "backends".)

[1] https://hackage.haskell.org/package/bluefin-0.0.6.1/docs/Blu...

[2] https://news.ycombinator.com/item?id=41366856


ah, you say that, but with wifi light switches and wifi toilets it's easy to connect those together nowadays!


We're using a similar approach in a PHP application by leveraging https://github.com/spaze/phpstan-disallowed-calls

In essence, within each domain we have defined:

a) a Public folder with code that other domains can use;

b) domain folders (src and infra) with code that only the given domain can use.

This way developers know not to change public contracts in (a), be it method signatures or interfaces (or, if they do change them, they understand they're changing public code), and are free to refactor (b), because those classes should not be publicly accessible and can change at any time. Even extending classes defined this way is disallowed.

This becomes helpful when operating within the confines of a monolith application, but with different teams owning different parts of the application. Trying to use a non-public part of another domain will be prevented at commit level (developers will not be able to commit their work) rather than at runtime, though.


Letting a static analyzer do these checks is a sane approach! The runtime detection of code files is really weird and would fail with bundling etc.


Runtime is important if security matters. Assume an attacker has already compromised your program: how much damage can they do? If you check at runtime, you can prevent the compromised code from calling your functions. Well, maybe. We don't know what code was compromised or how, but many compromises are a buffer overflow that only runs a few hundred bytes of code (overflow more than that and you clobber something else important and the whole attack fails), so the attacker needs to quickly call some sensitive function. If that sensitive function is checking its caller, the attack can't do anything.

I've been thinking about the above for a while, but I have not yet figured out how to put it into practice. I'm also not sure how much value it would have against a real-world attack. It seems like it should work, but security is often weird.


Thinking about this scenario, I'd suppose disabling features would be the way to go. Disabling individual methods doesn't look feasible, and I imagine it should be done at the configuration level (what I mean here is that you need an easy and quick way to switch the feature flag, rather than recompiling/redeploying). But then the code would have to be built to support such a feature as well.


Yes :) but given that OP does the checks at runtime, I thought I'd give that disclaimer.


It’s probably over the top, although I respect the intent.

The quickest way to destroy “velocity” is by introducing dependencies between implementation details with no barriers in code.

The more constraints you have to satisfy when you make changes, the more effort it is to make the changes (provably); and thus, the more time it takes, destroying velocity.

That said, doing this as described is probably overly dramatic; I like the Rust model: by default, parents can access child scopes, and children can access their immediate parent. Otherwise you have to explicitly "pub" a symbol even to use it in the same crate, which is the escape hatch for pragmatism over strictness.

It would be lovely if some other languages (js, python) had such a delightful module system, but, they don’t.

With something as fundamentally undisciplined ("flexible") as JS, you need to enforce the rules pragmatically with process and tools like code reviews and linting.

It is worth doing though; this is one of my favourite architectural topics because it’s so easy to totally destroy anyone who tries to argue the point. :)


Do you mean that it's better if the constraints are visible when you change the code, but "hidden" constraints, which manifest only under specific conditions at runtime, are bad?


I suppose.

Abstractly:

A constraint is an invariant that must be true before and after the change.

It doesn't matter if they are 'hidden' or 'visible'; if you must maintain the existing behavior, then you have to prove that no constraints are violated by the change you make.

The effort you have to expend is proportional to the number of places where you have invariant that must be maintained.

However.

If you isolate constraints in groups (call them crates, modules, whatever you want), then you can create a region which has no constraints except at the boundary where you interact with a defined API.

Since the volume is always >= the surface, you can prove that having the boundary takes strictly <= the effort to maintain than not having it.

Concretely:

If you can prove (because it is impossible due to the compiler rules or whatever manual process) that you have no dependencies on the internal functions inside a particular namespace / module / etc. because the only way it is ever possible to call those functions is via a specific public API.

...then you are not constrained in changing / refactoring / removing / whatever you want.

The internal workings have no external dependencies, no callers.

It is therefore less effort to maintain that code, because any change does not require you to go hunting around for someone calling helpers.internal.finance.convertCurrency for its view model in a completely different domain.

If you want to call that 'hidden' then sure.

I'd probably say 'different view', but basically yes. The point is that, of the entire set of constraints for the system, you can say: today, our work is in this area of the code and we have a much, much smaller set of things to worry about while we make changes here.

It's not rocket science; you see people doing this all the time; 'It's just a few tests, it can't break the database'.

Correct. The tests are in a separate package on which the database schema strictly has no dependencies.

Therefore it is quick and easy and safe to make changes to them.

...

If your project only has two domains, 'tests' and 'non-tests', you're literally doing the same thing the OP is suggesting, just at a slightly reduced scale.

In bigger, more sophisticated projects, the number of mini-domains inside a single project is usually > 2, but they exist for exactly the same reasons.


> Since the volume is always >= the surface

You'll find out that analogies to Euclidean geometry do not hold for software structure in general.

That's a nice heuristic, though. Just be wary that if you follow it blindly, you will inevitably optimize things into a counterexample where everything breaks down while your proof still says it's optimal.

Static analysis is good, in moderation; dynamic validation is good, in moderation; accepting errors and dealing with them is good, again, in moderation. All of those things turn bad if you do them out of dogma.


> Static analysis is good, in moderation;

Care to point out your best example on when static code analysis ceases to be good?

I don't think there is possibly any copout to justify away the benefits of static code analysis. Either your code works by complying to the interface, or it doesn't and violates interfaces. The only thing static code analysis does is force you to acknowledge the real interfaces.


Hum... You mean you've never encountered complex interfaces dictated by their types?

I was about to make a Haskell joke, but it's actually Java that is most famous for this. There are lots of "enterprise Hello-World" examples published on the internet, if you want some done on purpose.


> Hum... You mean you've never encountered complex interfaces dictated by their types?

I asked you what you personally believe is your best example on when static code analysis ceases to be good. I fail to see how anyone's opinion or personal experiences determine your own personal opinion on a subject.

Can you provide any example on how static code analysis can be anything other than a good thing?

> There are lots of "enterprise Hello-Word" published on the internet if you want some done on purpose.

I'm sorry, what does this have to do with static code analysis?


Sure, maybe the metaphor doesn't make perfect sense.

...but, strictly, if you have a module with K public functions and P private functions the maximum possible number of unique external dependencies you have to track on the module is K.

If P > 0, then the 'api surface' of the module is smaller than the total set of functions.

If K = 0, the module has no external dependencies and you can do whatever you want.

I'm going to say with complete confidence that creating 'boxes' in your software where you minimize K creates maintainable software.

When K = P, you have a dumpster fire.
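The K-vs-P framing maps directly onto, say, a Python module's export conventions. A minimal sketch (the function names are invented): one exported function, two private helpers, so the "API surface" to track is exactly K = 1.

```python
# A module with K = 1 public function and P = 2 private helpers.
# Only convert() is part of the API surface; renaming or deleting the
# underscore-prefixed helpers can never break an external caller that
# respects the boundary.

__all__ = ["convert"]  # K = 1: the only name this module exports

def _load_rates():
    # P: private, free to change at any time
    return {"usd_to_eur": 0.9}

def _apply_rate(amount, rate):
    # P: private
    return round(amount * rate, 2)

def convert(amount_usd):
    """The single public entry point (the module's K)."""
    rates = _load_rates()
    return _apply_rate(amount_usd, rates["usd_to_eur"])
```

Python only enforces `__all__` for `from module import *`, so this is convention plus linting rather than a hard wall, but the accounting of external dependencies is the same.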


I don't know if that's what he is saying, but what you are saying sounds logical.


> The quickest way to destroy “velocity” is by introducing dependencies between implementation details with no barriers in code.

Nonsense.

The biggest velocity killer is having to deal with the technical debt left behind by those who mindlessly commit changes that turn your software into a big ball of mud.

Ask yourself this very simple question: why did your team feel the need to add these constraints? What classes of problems were they avoiding by preventing specific types of changes to the software architecture from being applied? What problems did they experience earlier that motivated them to ensure they wouldn't experience them again?

And why should your laziness to do things the right way take priority over avoiding making the same mistake?


modules? crates??? uhmm... namespaces???

I think you're conflating the language design stuff with the tooling (packaging, and calling that 'modules')... To be fair, you're conflating, but I'm fanboying Python.

In any case, the point is there's a distinction between a linker and a compiler, but not in interpreted languages. So comparing Rust's crates with JS or Python (even Java) is an instance of an apples-to-oranges comparison.


Maintaining boundaries in your code is something you can do in any language, regardless of built-in support in the language or tooling.

Ironically, untyped languages like JS and Python are particularly prone to the problems of poorly structured code, because you can't use static type checking to find broken dependencies at compile time.

/shrug


Can you explain how types would help with code structure? From experience, even with typed TS nothing prevents developers from creating a big ball of mud


Always feels like you're in a dysfunctional place when you have to program this defensively.

I had an acquaintance who was writing library code for a few dozen data engineers in python, and she had to resort to locking down private methods by checking the call stack after engineers repeatedly got hold of private objects, or objects that are only there sometimes (e.g. when not running clustered).
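A sketch of the "check the call stack" lockdown described above (the `Cluster` class and method names are invented): a private method verifies that its immediate caller lives in the same module and raises for everyone else.

```python
import inspect

class Cluster:
    def nodes(self):
        # Internal call path: allowed, because the caller is this module.
        return self._fetch_nodes()

    def _fetch_nodes(self):
        # Inspect the immediate caller's module; refuse outsiders.
        caller = inspect.stack()[1]
        if caller.frame.f_globals.get("__name__") != __name__:
            raise RuntimeError("_fetch_nodes is private to this module")
        return ["node-1", "node-2"]
```

It's a blunt instrument with real overhead per call, which is presumably why it only gets reached for after politeness has failed.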

I adopted a similar stance of "you can't abuse what you can't access" in platform engineering too, but I am not the greatest fan of having to do this in the first place. But the alternative always seems to be that someone will change the scope of what's supported by you for you, as soon as someone builds a dependency.


There are multiple reasons to want this, some are dysfunction, but others are useful.

Every time you use a function from elsewhere, you become coupled to that function and how it works. That means that if whoever maintains that function wants to make a change, someone must be responsible for keeping your code working with the change. (This can be handled many ways; all too often it isn't, customers get mad because a feature doesn't work right, and they have nobody to complain to.) If you put controls over who can use what, you can prevent a lot of issues.

In a compiled system changing an interface means more code to recompile, so control over who can use what reduces build times.

If you know some function is for internal use in one place, you won't spend time designing a good interface. If the function is for use by everyone, you should spend more time making it a nice interface that is easy to use correctly. Every once in a while I design something for internal use like that, but then someone external discovers it is useful, and now they are using a bad interface that meets my needs but isn't quite right for them.


"I monkey patched your code and now it doesn't work" would be a deeply irritating bug report. It's directly equivalent to "I forked your codebase, changed the text files, and now it doesn't work".


For some reason, recipe bloggers get "I substituted this ingredient with <completely different thing> and got something bad" a lot.


At least there it is useful sharing: it warns others off the substitution. There is also "I substituted this ingredient with <completely different thing> and got something great", which I see happen a lot more often. In cooking it is common to be missing some ingredient, so you want to know whether some substitution will turn out.


This was the kind of place where data engineers would ask an LLM for config options (for the framework that was built internally) and then complain they didn't work. No idea how these people got by and even made more than the person writing the framework.


Were these leaky abstractions built on top of other libs?


Databricks, so it's hard to avoid.


I've worked like this for decades.

Layers, with each layer assigned a particular domain and API restriction (for example, I have a multi-layer backend, and, if I want to access the database directly, I need to implement that at the very lowest layer, and then set up a "tunnel" of access to the top-layer exposed API, through the intervening layers, applying whatever access control and filters are appropriate for each layer).

C++, if I remember correctly, had a lot of attributes you could assign to classes and types to regulate access, but it's probably been around 20 years since I've written C++.


AFAIK, the only C++ access specifiers are private, protected, and public. Hardly an "insane number".


What about "friend"? Do they still have that?

But you have a point. It's a bit overblown, and I'll tamp down the rhetoric.


The problem is these all assume you are not under attack. If you are an attacker you can ignore all that.

The C++ ABI is known (not to be confused with defined!) on my systems, so if you have a pointer you can add an offset to find any private member and then modify it (even if it is const you can change it, though the code may not read the change). You can also call any private function from anywhere, because how the function name is mangled can be figured out. These tricks are undefined behavior per the C++ standard, but they will work in any implementation if you know how your implementation works under the hood. They will not work on a different implementation, but you can make a different version for each implementation.

The above tricks work for most languages, though the details are different for each.
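For instance, the Python analogue: "private" double-underscore attributes are only name-mangled, and the mangling scheme is documented and predictable, so any determined caller can reach and modify them.

```python
class Account:
    def __init__(self):
        self.__balance = 100   # mangled to _Account__balance

acct = Account()
# No accessor needed; the mangled name follows a fixed, documented scheme:
acct._Account__balance = 10**6
print(acct._Account__balance)  # prints: 1000000
```

As with the C++ trick, the access modifier is a convention for cooperating programmers, not a security boundary against hostile code in the same process.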


I love how the comments are split along the lines of, "wait, some people don't do this?" and "this is a maintenance nightmare!"

If you know, you know.


> this is a maintenance nightmare!

Yeah, it's a pain to maintain, but it is damn secure.


Assuming reasonable factoring, maintenance is only a problem when trying to seriously cheat the layers, eg your controller wants to modify the template engine’s internals for a single request. Better to just be honest about it by punching a hole in the abstraction that is appropriately named and using that.


No cheating!

For example, I stop creating SQL after the first layer above the DB access layer. Everything above that is a function-based abstraction, so there's no SQL in most of the stack.

Some layers are meant to introduce "replacement points," so you could swap out the SQL for a NoSQL solution, if you want.

Most layers have a fair bit of validation and permission-checking, etc.

But making sure that I stick to the rules for each layer can be a pain. I stick to them, anyway.


Yes, this is the way. But I'm fine with short-term cheating when there's no other way because it sticks out a lot more:

"wtf, why is the registration endpoint executing hand-written SQL but no other endpoint is?"

The fact that it ends up looking conceptually ugly compared to everything else means it is harder to ignore, and people have to justify their sloppiness when getting it merged.

> But making sure that I stick to the rules for each layer can be a pain. I stick to them, anyway.

It can be. I've found you usually internalize the rules eventually and your mind naturally slots things where they need to go. Then you can aggressively add features/refactor because the surface area is much smaller.

It's like learning to work with strong type systems versus fighting them.


> Layers, with each layer assigned a particular domain and API restriction

Yeah, that's not a good way to do it.

You can only keep adding layers if they are platonic. If they represent any real-world domain, they will interfere with each other, your abstraction will leak, and every change will require changes everywhere.


> Yeah, that's not a good way to do it.

Works for me.

I've been working this way for a lloooonnnnggg time.

Of course, it requires strict Discipline, and does add complexity. If you stray outside the lines, you can make a fearful mess.

I don't stray outside the lines, and it works great.


For compiled languages, Bazel has "visibility", which enables this: https://bazel.build/concepts/visibility


This concept has been around for ages. I remember way back in a previous life as a Java dev having a framework where you could annotate methods with permission requirements, and the framework would add runtime instrumentation to ensure the proper context was created in the current thread and had the proper permissions to invoke that method any time it was called.



The ideas and conclusions remind me of OpenBSD's pledge too. They privilege-separate a lot of their in-house software, so that the bit that talks to the internet is separated as much as possible from the bit that does parsing, for example. But that was just a best practice they tried to follow, with no way of enforcing it. Once pledge was created, they found violations as programs started being killed for using disallowed syscalls. For their use, capability-based security enforced and proved something they were already doing, just informally.


Yep!

And more specifically, 'membranes', which JavaScript was extended with the 'proxy' construct to support: https://tvcutsem.github.io/js-membranes . The types world came at these ideas as Scheme's higher-order contracts (dynamically enforced), such as runtime-checked gradual types.

Playing those primitives & ideas out, we made library-level access control policies here that we called object views (at Google, part of caja), and more natively via browser extensions as aspect policies (conscript, at MSR, sort of like hooks for CSPs)

JS is very dynamic, so writing unhijackable policies was quite hard. Imagine being careful about every getter, having to pre-freeze every util/stdlib, and trusting no libraries.

What's old is new again: LLM OSs want to give AIs access to everything in an app as tools, so either very little gets exposed, or we walk back into problems like these.


Capabilities is what you get if you keep traveling down this road, deal with the resulting issues, tune the approach, and iterate on it a lot. It doesn't take long to figure out that partitioning permissions by "what functions can be called" is a nice start, and I don't mean to criticize it as a start, but it is only a start. It isn't the correct dimension of the code to cut on.

Capabilities is, in my opinion, the biggest collective blind spot we have in the programming world right now. Bigger than functional programming, bigger than proofs; those may be useful but there is some general awareness they exist. Capabilities are almost unheard of. If someone was setting out to put their mark on the programming language landscape, finding a way to integrate capabilities into a practical programming language would be something I'd have towards the top of my list. I find it frustrating how many "new" languages are just current languages respelled a different way. It has been a while since I've seen a language try something new.
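For readers unfamiliar with the idea, here is a rough capability-style sketch in Python (all names invented for illustration; real capability languages like E enforce this at the language level rather than by convention): authority is an object you must be handed, not something you reach for ambiently.

```python
import io

class WriteCap:
    """Capability to append to one specific stream, and nothing else."""
    def __init__(self, stream):
        self._stream = stream
    def write(self, text):
        self._stream.write(text)

def make_logger(write_cap):
    # This closure holds no ambient authority: no open(), no globals.
    # It can write to exactly one stream because that is the only
    # capability it was given.
    def log(msg):
        write_cap.write(msg + "\n")
    return log

buf = io.StringIO()
log = make_logger(WriteCap(buf))
log("hello")   # can write to buf, but has no way to open any file
```

The partitioning cuts on "what authority does this code hold" rather than "what functions can it call", which is the distinction the comment above is pointing at.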

Or, in this case, new-ish. I'd suggest studying E a bit first: https://en.wikipedia.org/wiki/E_(programming_language) and maybe finding some programmers of it and speaking to them about it. But at this point, a language from 25+ years ago with no live community (I scanned over it quickly, there's a Wiki whose "recent changes" shows no changes and a mailing list with an approximate rate of 1 message a quarter or so) means it's going to be so far behind that you might as well start from scratch.

We keep sort of kind of recreating it a bit but it's really hard to bodge on to a language as a library.

Of course, this may be as pie-in-the-sky as expecting everyone to use proof languages. Here I am pitching this and it's an uphill battle for me just to get people to use a Username type instead of a bare string in the real world. Still, the world has moved on in the last 20 years... perhaps the time is right.

(On the off chance that someone ever does decide to build a capability-based language, I would suggest putting a lot of thought into transparency and diagnostics of the capabilities, because even in a super-mega-alpha-test "function call failed: missing capability" isn't going to be a very useful error message. For instance, I would offer the suggestion of, if some code tries to do something at runtime but it doesn't have the capability, in the process of generating the stack trace tell me the last function call that did have the capability, or if the program never had it in the first place. If one can prove it at compile time so much the better, though I don't know how far you can take this at compile time. Also be careful piling too many trendy other type features on top; along with the fact capabilities can arguably replace some other type features, you're going to have enough work to start out with without also trying to be the language with the super-strongest typing as well.)


I really wish Microsoft had gone forward with their Midori project, or at least open-sourced it; they apparently put a lot of work into designing a capability system, both in language design and OS architecture. https://joeduffyblog.com/2015/11/10/objects-as-secure-capabi...


Pony has caps built in: https://www.ponylang.io/


Austral looks really cool for a modern programming language built around capabilities: https://borretti.me/article/introducing-austral


> How can you ensure that the modules follow these rules? [...] As a proof of concept, I've created a Node.js library called firewall-js using JavaScript proxies.

Unless I misunderstand, there's a vastly simpler way to do this:

- use a monorepo;

- put each module in its own package.


This seems pretty neat for when your layer separation is merely by folders in JavaScript.

It's unfortunate that this doesn't highlight any problems until the code is run, as I feel it's always best to find out as soon as possible that you're making a mistake. Your IDE instantly flagging a problem as you write the code is the ideal time, with the next best being compilation. In this case, probably even a simple integration test would instantly let you know, which isn't terrible.

There is a runtime performance impact here too, but I would expect its cost is reasonable for the value it adds.

In the C# space (and I imagine many other languages have equivalent options), layer segregation is a good reason to split your code into multiple assemblies (or projects), and the compiler can then enforce the layering. Of course we sometimes also split things up into different assemblies for other reasons, but for any non-trivially-sized code base, I consider one assembly per layer the bare minimum.


You can even do something akin to this package with the "InternalsVisibleTo" property on the assembly, so you can expose your internal classes to other specified assemblies. For non-C# folks, Internal classes are normally visible only to other classes within the same assembly. This is very useful for building a tightly controlled public API while maintaining a testable code-base.

This option is typically used for exposing your internal classes to a unit test project, but can also be useful for exposing to LINQPad, and could be used like the above to hand out control to other assemblies.


This is basically what java modules do, right? From java 9 and onwards.

You have public stuff within your module that other packages in the module can use. But another module cannot use it without you declaring it as part of your actual public-facing API. I don't see people use it much in application code, though. Often it's just a single module where people can call everything.


Yup, same experience here.

Java modules took a long time to land, but now it looks like the whole exercise was for making the JDK itself more modular. I haven't seen any use in application code.

Even things like Spring Modulith don't use Java modules


It seems like a good direction but the wrong layer of abstraction. Have a look at Cloudflare's workerd architecture, with nanoservices and capability-based permissions. You can build all this at the runtime level, where services are not even allowed to access the internet or any file except through explicitly configured bindings. These bindings can contain logic too, so an egress service could also contain logic for filtering or rewriting, etc. This is so powerful and still underhyped.


Neglected topic IMHO: don't trust other parts of your code base. I wrote about how to firewall your code with Casbin in Go:

https://www.inkmi.com/blog/simple-example-casbin-rbac-abac-o...


There are libraries for writing architecture tests for .NET and Java (that I know of) to enforce architecture design. You can enforce reference rules for classes and namespaces, like: classes from the domain namespace cannot reference the API or DB namespaces.

I haven't used them in a real project though.


This is why I like to use https://www.archunit.org/ in my Java apps. It allows you to test how the application is structured and being unit tests, they will verify that with every build.


Slightly tangential, but I've found importlinter neat for restricting what layers in your Python code can call each other (in this case by restricting imports, not calls): https://pypi.org/project/import-linter/

For example, you can ensure lower-level packages don't import higher-level ones:

  [importlinter:contract:my-layers-contract]
  name = Layered architecture
  type = layers
  layers =
      mypackage.cli       # can import anything
      mypackage.services  # can't import from mypackage.cli
      mypackage.models    # can't import from .cli or .services



That `userService` example is just awful; it feels like it's trying to achieve so many things with exactly the worst options:

- Not using ES modules, which is compounded by:

- Creating a random "bag of crap" (as I call it) by throwing all the functions into an exported object

- Getting no static build-time feedback about whether the functions you're calling even exist, which you would get with either of the following.

The better alternatives here are:

- Just have plain exported *functions* in an ES module

- If you need shared state that both functions access, then export a plain class in an ES module


What a curious approach. It seems like it would be a mess to maintain in a larger project, but the concept of controlling who/what can call the public functions is an interesting idea.


I have a somewhat different approach: using a multi-repo system and some packaging rules, I can make some interfaces unavailable outside of my repository, and still others available only to a subset of the repositories.

There are pros and cons of multirepo vs monorepo. This is one interesting side effect of the multirepo approach.


I would think lint rules are better for encouraging this, and they have no runtime implications. Easy escape hatches as well, if you really want to crack the firewall to ship something urgently.


Looks kind of like OpenBSD's pledge(2) and unveil(2).

I have used both and I personally think those calls are the best security mitigations in our field :)

I would love to see something as simple in Linux.


I like the idea, but I think it's in the wrong place.

They're kind of like contract statements for preconditions/invariants but at the design layer, enforced at compile / run time.

I think this is useful for architect types, though I think this belongs more around the DependencyInjection config area.

A class should not be aware of what uses it.


IMO the better way to handle this is to statically determine the Dependency Structure Matrix of all modules and to have a linter in the build process that checks that the dependencies conform to the restrictions you want to impose.
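A minimal sketch of that check (everything here is hypothetical, not a real build tool): extract import edges from each module's source and compare them against an allowed-dependency matrix:

```javascript
// Hedged sketch: flag any import edge not present in the allowed
// dependency matrix. The regex-based import extraction is illustrative.
function findViolations(sources, allowed) {
  const violations = [];
  for (const [mod, src] of Object.entries(sources)) {
    for (const m of src.matchAll(/from\s+['"]([^'"]+)['"]/g)) {
      const dep = m[1];
      if (!(allowed[mod] || []).includes(dep)) {
        violations.push(`${mod} -> ${dep}`);
      }
    }
  }
  return violations;
}

const sources = {
  models: `import { log } from "util";`,
  cli: `import { save } from "models";`,
};
const allowed = { models: ["util"], cli: ["models", "services"] };
```

A linter step in the build would then simply fail whenever `findViolations` returns anything.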


The idea (as I understand it...) is to structure the codebase as a directory tree and have position in the tree determine whether some module can call into another or not. Something along the lines of: a function can call into one defined in source at the same level or below, but not above.

I did that for a while in C. It does sort of work. Layout amounted to:

  foobar.h
  foobar/foobar.c
  foobar/misc.h
  foobar/misc.c
  foobar/submod.h
  foobar/submod/submod.h
  foobar/submod/other.c
The scheme there was to #include peer or submodule source; avoiding "../" in the path kept you from reaching into parent modules.
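That peer-or-below rule boils down to a path-prefix check, something like this (a hypothetical helper, in JavaScript for brevity; the file names are from the layout above):

```javascript
// Sketch of the rule: a file may include peers or anything below its
// own directory, but never an ancestor.
function mayImport(fromFile, targetFile) {
  const fromDir = fromFile.split("/").slice(0, -1);
  const targetDir = targetFile.split("/").slice(0, -1);
  // every directory segment of the including file must prefix the target's
  return fromDir.every((seg, i) => targetDir[i] === seg);
}
```

So `mayImport("foobar/foobar.c", "foobar/submod/submod.h")` passes, while `mayImport("foobar/submod/other.c", "foobar/misc.h")` fails, because the target sits above the including file.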

Within a simple header/source pair make the declared functions global and all the others static. For the header/directory pair, compile the source files, link them, internalise the symbols that aren't external in the file with the name matching the directory.

I still broadly like that approach. The llvm-link && internalise applied at submodule scope has the side effect of optimising the modules individually. You can start with a header/source pair and convert it to a header/submodule pair without disturbing the rest of the codebase.

What doesn't work so well, and also isn't addressed in the "firewalling" post, is that inevitably the dependency tree wants to pick up a cycle. Some dependency of some module looks like it would be useful to another module and include "../../foobar/misc.h" suddenly appears in the scheme and promptly fails to link because misc.c's symbols are hidden inside foobar.

The "fix" is either to abandon this scheme, start putting functions as static inline in header files as a "temporary" workaround, or to move the multiply-used dependency up through the tree until it is accessible to everything that wants it. "Misc" is prone to ending up at the top level.

I think that pattern died when I disabled the internalise step to help debug something where I wanted to pull in code from elsewhere and then never re-enabled the internalisation. It still seems like a broadly good idea but it does shine a clear light on the dependency structure of the codebase which obstructs the natural descent into a ball of mud. These days I think I'd want the internalise / strict separation enforced in CI builds and not in dev builds (much like unused variables and similar).

Nice to be reminded of that pattern. Thanks for posting the site.


How is this different from using access modifiers (public, private, protected)?

P.S: I do work with code (reading, writing and breaking it), but I’m not a software engineer.


It's more granular than access modifiers. You may want an interface to be accessible from outside, but not from every caller.


Alternate title: How to terrorize codebase maintainers 5 years after your departure.

Maybe one step further would be to leverage a lot of encrypted stored procedures in SQL.


So.. JavaScript is slowly reinventing OO class and module access controls?


Sounds like ABAC at the service level; go check https://casl.js.org/v6/en/


So now when you want to use a dependency, you have to update that dependency?

Isn't that a maintenance headache, which doesn't really stop people from doing that anyway?

I understand the desire for this, having long and messy dependency chains can completely grind large changes to a halt. I'm currently dealing with a .net solution of ~300 projects with a dependency tree that's better described as a dependency web.

But this feels like the wrong solution. Better isolation can be achieved by proper packaging and leveraging of private package repositories.


[flagged]


Except that any coding 'rule' not enforced by a tool is likely to be ignored once a team is big enough. We have that kind of checker at work; it is a good thing, provided you can add (justified) exceptions.



