Hacker News | dankoncs's comments

I don't want to derail the parent thread, so let me ask you: what job skills are "low risk" for you? Can you give a couple of examples, please?

Thank you! :)


Sales is the lowest-risk tech-related skill I know of; it would still work even if an apocalypse hit and we had no electricity. In all likelihood, we'd still be making tools and selling them. It also helps you think in the problem/business domain: you're not simply learning PHP to build a website, you're doing it to, say, help people find hotels.

It works on your resume, it works for job searching, it helps with getting promoted (by proving yourself useful), and it helps you land a higher salary without negotiation. You still can't sell crap, though, so you need a good product (i.e. yourself, your skills) worth selling. But it complements any other skill you have.

There's also communication skill in general: how to refine ideas into denser, cleaner forms that both people and AI can understand. That works for sales, and it works for code.

Tech-wise, I can't say. The nature of tech is that it's very high risk. If you've looked at how GPT-3 is engineered [1], it makes you question whether algorithms and OO will remain useful skills. Sam Altman expects us to hit AGI by 2025 [2]. He's probably being a little optimistic, like every other programmer, so let's double the estimate and say we have until 2030. Codex itself ranked #96 against 9,000+ humans in a coding challenge [3]. So whatever you pick, it should be fairly AI-proof.

Data will be around, and anything that deals with data will be helpful. Even if you could tell an AI to do whatever you want, it has to pull data from somewhere. Spreadsheets are great. Databases will be around for a long time; the top three or so most-used DBs all use some variation of SQL.

There will also need to be some kind of front end for that data, for people (and even AI) to use. Low/no-code has been around forever, but there's always a domain it can't solve. Something specialist like Shopify, Magento, or WordPress solves a problem millions of people have. If you want something that combos well with higher-risk work, you can learn UI/UX.

Again, low risk, low returns. The absolute lowest risk is food. Everyone needs to eat. Farming and cooking will keep you from starving, but probably won't take you much higher than that.

[1] https://www.gwern.net/GPT-3#prompts-as-programming

[2] https://twitter.com/sama/status/1081584255510155264

[3] https://challenge.openai.com/codex/leaderboard


Out of curiosity, why? Perspective projection is based on an insight that is hundreds of years old:

https://en.wikipedia.org/wiki/Pinhole_camera

Projective geometry and the perspective projection matrix (the extrinsic and intrinsic parameters of your modeled camera) are basically the "mathified" version of that, or of "perspective drawing" as artists have practiced it for hundreds of years.

In my opinion, computer graphics is basically applied math (or linear algebra). (You don't even need linear algebra or matrices to render things on screen, but it would be painful to keep track of what's going on.)

Math hardly ever dies off, it seems; at most, some methods change, like calculating a result by constructing some geometry and measuring its length.

So unless we no longer need to project anything from 3D (world/scene) to 2D (image, monitor, photo, photosensitive sheet, ...), I think we can count on it for a long time.

What changes are the algorithms. Earlier, we used "edge walking" to fill triangles; now we use edge equations that tell you whether an image point lies inside a triangle or not. But they do essentially the same thing, namely fill a triangle.

Also, things like the Phong model may stick as well.

There are other methodologies that are still basically based on the pinhole camera model. In ray tracing, you shoot rays into the scene and check whether each one intersects an object. If it does, you color that pixel with the intersected object's defined color; otherwise you leave it and move on, pixel by pixel. Shooting rays from the camera into the scene is essentially the pinhole camera in reverse, where light enters the hole and "colors" a photosensitive sheet of paper.

At least this is my understanding of it.

Just learn math, math, math, and you will be fine. Math is the lingua franca of engineering and science, so learn it, and you will understand the concepts from those other fields/branches.


I don't think we're disagreeing. The fact that math doesn't change is exactly why I'm saying it won't go obsolete.


Concise and good explanation. :)

So the syscalls of Linux/Unix machines are the same because of the POSIX API, which is a standard for *nix OSes.

Now, we have compilers such as gcc, clang, and Microsoft's C++ compiler. Do they each decide on their own ABI (the specification of how things are implemented at the lowest level)?:

https://gcc.gnu.org/onlinedocs/libstdc++/manual/abi.html


Okay, let me give this another try:

So then we have two APIs: the C standard library (as an API) and the POSIX API. The POSIX API defines the syscalls such as write, read, open, ... The C or C++ standard library provides the header files (function declarations etc.), and its .so file (the shared library) lives in memory.

Take a closed-source toolchain like Microsoft's as an example: I cannot see how the standard library is implemented, because only the headers are visible, and the actual compiled code "lives" as a shared library in memory (a .so file on Linux, a .dll on Windows). This shared library is accessible to all C/C++ applications and is essentially C's runtime environment (RTE). (Also, C and C++ are standardized, so no matter who implements the compiler and the standard library, it has to follow the language standard.)

The standard library is not only a wrapper over syscalls (malloc ultimately over brk/mmap, printf over write); it also provides useful functions and algorithms such as qsort and std::transform. Furthermore, data structures/containers can be part of a standard library too: std::vector, std::unordered_map, ...

Now, when we compile a C or C++ program, the ABI is basically a set of definitions/rules that the compiler has to abide by: how function parameters are passed (on the stack or in registers), how a function is called, how arguments are laid out, how operations map to machine code, etc. But an ABI is not only a mapping between C instructions and machine code; it also pins down how C or C++ code talks to the OS (e.g. the syscall convention).

Exception handling in C is basically this (setjmp/longjmp):

  #include <stdio.h>
  #include <setjmp.h>

  int main(void)
  {
      jmp_buf env;

      if (!setjmp(env))     /* try: something might go wrong below, so bookmark this line */
          longjmp(env, 1);  /* throw: jump back to the bookmark, triggering the catch (else) block */
      else                  /* catch */
          fprintf(stderr, "Yikes, something went wrong! ;)\n");

      return 0;
  }

So if I do exception handling in C++, then the C++ ABI has to specify an exception-handling routine. For example, virtual functions in C are basically this:

https://godbolt.org/z/x94cdb1Y5

So vtables are structs of function pointers, and the C++ compiler has to follow some ABI convention (namely, rules for how to lay out the vtable so that it corresponds to the C++ code).

- syscalls ~ POSIX API (the operating system's API): write, read, open, ...

- C/C++ standard library (.so files loaded into memory) ~ wrappers for syscalls (printf, malloc, ...) plus containers/data structures/algorithms (std::vector, std::unordered_map, qsort, ...)

- ABI ~ rules that tell the compiler how vtables, function calls, and exceptions must be implemented

Correct so far? Or am I still wrong?


Yes, you seem to have got it.


Yep.

Use some of the -W* flags (-Wall, -Wpedantic, -Wconversion, ...) and specify the standard with -std=c90.

Avoid undefined behaviors:

- https://en.cppreference.com/w/c/language/behavior

- https://wiki.sei.cmu.edu/confluence/display/c

(Also use cppcheck, valgrind, gdb, astyle, make, ...)

Done.

Fun fact: JS and C are both standardized by ISO.


And always use "-fwrapv -fno-strict-aliasing -fno-delete-null-pointer-checks". These cover the hardest-to-reason-about kinds of UB, and the small performance benefit is not worth the risk. I would love to always be certain I haven't accidentally hit UB, but that's equivalent to solving the halting problem, so for peace of mind I just ask the compiler for a slightly safer dialect of the language.

P.S. Old code definitely needs them: at the time, compilers didn't optimize so aggressively, and lots of old code does weird stuff with memory, shifts, etc.


Have a look at Apache 1 (C) or KDE 1 (C++); it'd be interesting to see what you think of those codebases, and they certainly predate 2001.


Cing through this, I can tell you that you can C C++: https://godbolt.org/z/sWETvsoKd (a demo of polymorphism in C90)

Not only can you do OOP in C, you can also do FP, implement your own iterators and the like. (Function) pointers are your friend: https://news.ycombinator.com/item?id=28378627#28406874

And remember: FP = pure functions (no side effects) + immutable values

Currying in C is essentially taking a function's address (a function pointer) and deferring the call until later. You can do lazy evaluation as well. The Cky is the limit!

Yes, C++ is comfy, but you cannot really do FP with its STL, since STL containers and algorithms assume mutability by default.

std::transform(v.begin(), v.end(), w.begin(), [](auto const& x){ /* ... */ return x; });

Transforming the contents of v requires a buffer(!) w. (The rescue: https://github.com/arximboldi/immer)

In Python, this would be something akin to:

  tuple(map(lambda x: x, t))  # t being a tuple here

which returns another tuple. (In Python 3, map itself is lazy, hence the tuple() call.)

Yeah, so these are just some pointers. I just wanted to stress that C can enable your creativity once you get the essence of it.



int const* const x; // C

int const& x; // C++

A reference is functionally equivalent to a const pointer: reference reassignment is disallowed, just as you cannot reseat a const pointer (a const pointer is meant to keep its pointee address). The difference between them is that a C++ const reference also binds to non-lvalue arguments (temporaries).

It is much easier to read from right to left when decoding types. Look for yourself:

- double (* const convert_to_deg)(double const x); // const pointer to a function taking a const double and returning a double

- int const (* ptr_to_arr)[42]; // pointer to array of 42 const ints

- int const * arr_of_ptrs[42]; // array of 42 pointers to const ints

- int fun_returning_array_of_ints()[42]; // function returning an array of 42 ints (decodable, but illegal in actual C; return a pointer, or wrap the array in a struct)

Try it out yourself: https://cdecl.org/

Hence, I am an "East conster". (Many people are "West consters" though.)

You can return function pointers:

  typedef struct player_t player_t; /* let it be opaque ;) */

  int game_strategy1(player_t const * const p)
  {
      /* Eliminate player */
      return 666;
  }

  int game_strategy2(player_t const * const p)
  {
      /* Follow player */
      return 007; /* octal, but still 7 ;) */
  }

  /* game_strategy: takes an int, returns a const pointer to a function
     taking a player_t const* const and returning int */
  int (* const game_strategy(int const strategy_to_use))(player_t const * const p)
  {
      if (strategy_to_use == 0)
          return &game_strategy1;

      return &game_strategy2;
  }

Functional programming = immutable (const) values + pure functions (no side effects).

Consting for me is also a form of documentation/specification.

"East const" for life! :)


10 or 42?


Thank you. 42. I edited my comment above.


