Hacker News | martingab's comments

In my home town there is a workshop/factory that mainly employs all kinds of disabled people. They produce e.g. wooden chairs, cup holders and the like, but also purely artistic, decorative pieces. All on very different levels, i.e. some people nail wooden sticks together as instructed, while others have complex jobs (if they are able).

For sure, all of the tasks they do during the day could be automated and in fact are automated at the big facilities. However, people like to buy their products, not only because they like to support them and give them the chance to contribute to the community (as was said in the video: they pay taxes and everything) but also because there is a general market for hand-made goods. There is always a small niche of consumers who prefer hand-made items for their individual charm etc. over the soulless, mass-manufactured alternative. I believe that this demand exists precisely because of the rise of automation (and thus is unlikely to vanish if automation is pushed further).

I'd also argue that working in a woodworking shop - being able to actually create something and (if the handicap/IQ allows for it) even be creative - has a much better effect on overall quality of life than working an assembly-line bullshit job. I don't know of a handicap that rules out every job of this kind yet still allows you to sort plastic from paper within a reasonable amount of time (but I'm sure someone can give me an example; in that case I'd argue that it's up to us to find or "invent" a suitable job or some helper device to enable them to do so - we have the money for it).

So yes, maybe getting rid of that particular job is the best thing that could happen to the guy in the video - provided there is an alternative job available that lets him add value not only to the consumer society but also to its intellectual and creative parts, and to his own (relative to his level of disability).


This was one of the reasons why the use of such tools was strictly prohibited at my former university.

Another argument given was: even if you only have to *think* about using such a tool, you are already in a situation where good scientific practice is no longer guaranteed. In other words: if you/the students had followed all the rules of good scientific practice right from the beginning, you would never need to use such a tool. But I guess if you are the developer of such a tool or work in that area of research, you probably see things differently...

Also: how many different ways are there to explain rules of scientific practice within 150 words? How much similarity would you expect from O(100) different students, even if they write independently? I'm not sure whether that is taken into account in such tools. On a different scale: when piping e.g. a typical PhD thesis through such a tool, the first introductory paragraphs will always raise red flags (simply because that topic has already been introduced 10000 times and everyone read more or less the same introductory textbooks). The important part - the main part of the thesis - of course should be unique (but if the supervisor/examiner/committee is not able to "detect" this on their own, well...). Of course, literally copy-pasting an introduction is still not okay. But, as the blogger also said, this can easily be detected by issuing a simple yawhoogle search in case the text already reads suspiciously (e.g. if the style of writing varies a lot between paragraphs etc.).
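For the curious: the core of such an overlap check is no magic. Here is a toy sketch, in no way resembling a production tool, using word n-gram "shingles" and Jaccard similarity (the function names and the 0.5 threshold are made up for illustration):

```python
def shingles(text, n=3):
    """Set of word n-grams ("shingles") extracted from a text."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(max(0, len(words) - n + 1))}

def jaccard(a, b):
    """Jaccard similarity of two texts' shingle sets: |A & B| / |A | B|."""
    sa, sb = shingles(a), shingles(b)
    union = sa | sb
    return len(sa & sb) / len(union) if union else 0.0

def suspicious_pairs(submissions, threshold=0.5):
    """All pairs of submissions whose shingle overlap exceeds the threshold."""
    names = sorted(submissions)
    return [(x, y) for i, x in enumerate(names) for y in names[i + 1:]
            if jaccard(submissions[x], submissions[y]) > threshold]
```

On 150-word answers that all paraphrase the same textbook rules, even honest students will share a fair number of shingles, which is exactly why a naive threshold flags too much.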

So yes, I'd agree that the use of such tools is of relatively limited value when it comes to "real" scientific work, but in this particular case it was quite neat to see how easily you can use it to automate the collection of evidence if you have a large class of students...


My first thought was that this is some kind of belated April Fools' joke, since all the color schemes looked the same (b/w) to me, until I realised that I had to turn on JavaScript.


Depending on the field of research, it is quite common to publish only in open access journals [1]. At my university, we are actually only allowed to publish in peer-reviewed OA journals. Most of them offer a paid subscription for the print version while the online version is free and open to everyone.

However, I have the (personal) impression that this is only well established in fundamental research (which typically comes with little economic interest). As soon as the research is not paid for by the state but by private companies (such as in medicine, robotics or any other "applied science"), scientists have a hard time choosing an OA journal (i.e. either one does not exist or you are not allowed to publish there). Changing this scheme is of course quite difficult, since too many commercial parties still benefit from it (which likely can only be changed by law)...

[1] https://en.wikipedia.org/wiki/Open_access


Abstract: Geometry, calculus and in particular integrals, are too often seen by young students as technical tools with no link to the reality. This fact generates into the students a loss of interest with a consequent removal of motivation in the study of such topics and more widely in pursuing scientific curricula. With this note we put to the fore a simple example of practical interest where the above concepts prove central; our aim is thus to motivate students and to reverse the dropout trend by proposing an introduction to the theory starting from practical applications. More precisely, we will show how using a mixture of geometry, calculus and integrals one can easily share a watermelon into regular slices with equal volume.
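The abstract's core computation can be sketched in a few lines: model the watermelon as a ball of radius R, write the volume to the left of a cut at position x as an elementary integral, and invert it numerically to place equal-volume cuts. (A rough sketch under my own simplifications: the helper names and the bisection approach are mine, not the paper's, and the rind and any non-spherical shape are ignored.)

```python
import math

def slab_volume(R, x):
    """Volume of the ball of radius R left of the plane at position x:
    V(x) = integral from -R to x of pi * (R^2 - t^2) dt
         = pi * (R^2 * x - x^3 / 3 + 2 * R^3 / 3)."""
    return math.pi * (R * R * x - x ** 3 / 3 + 2 * R ** 3 / 3)

def equal_volume_cuts(R, n, tol=1e-12):
    """Positions of the n - 1 parallel cuts splitting the ball into n slices
    of equal volume, found by bisection (V is strictly increasing in x)."""
    total = 4.0 / 3.0 * math.pi * R ** 3
    cuts = []
    for k in range(1, n):
        target = k * total / n
        lo, hi = -R, R
        while hi - lo > tol:
            mid = (lo + hi) / 2.0
            if slab_volume(R, mid) < target:
                lo = mid
            else:
                hi = mid
        cuts.append((lo + hi) / 2.0)
    return cuts
```

For n = 2 the single cut lands at x = 0, as symmetry demands; for larger n the outer slices come out thicker than the central ones, which is the mildly surprising bit that makes the exercise fun for students.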


We run a stupidly simple (though small-scale) GitLab instance in Docker (the official image; it's kinda a one-click install). You get Mattermost automatically; you just need to enable it in the GitLab config and it will install/run Mattermost within the GitLab container without any trouble. So if you already have (self-hosted) GitLab, Mattermost is easy to set up and maintain (actually no extra effort at all). Depends on the number of users of course, I guess...

The integrations with GitLab issues/groups/etc. also look neat but are barely used, tbh. Can't compare with Rocket.Chat.


Can anyone explain to me why exactly startpage.com is not listed? They also list other meta-searchers which have ads. I know some of the rumours about Startpage, but not mentioning it at all made me wonder what the criteria for the list are, or whether I've missed something very bad about Startpage (such that it becomes obvious not to list it)...


This site is disguised as something it's not; if you look at it, they're just an affiliate farm. Case in point - look at the VPN reviews - they only mention those providers where they also have an affiliate link.


They didn't rebuild the Tevatron but were still able to rediscover the top quark within a different experimental environment (i.e. the LHC, with tons of different discovery channels) and have lots of fits for its properties from indirect measurements (LEP, Belle). Physics is not an exact science. If you have only one measurement (no matter whether it's software- or hardware-based), no serious physicist would fully trust the result as long as it wasn't confirmed by an independent research group (by doing more than just rebuilding/copying the initial experiment, e.g. using slightly different approximations or different models/techniques). I'm not so much into computer science, but I guess it might be a bit different here once a proof is based on rigorous math. However, even then it's sometimes questionable whether the proof is applicable to real-world systems, and then one might be in a similar situation.

Anyways, in physics they always require several experimental proofs for a theory. They also have several "software experiments" for e.g. predicting the same observables. Therefore, researchers need to be able to compile and run the code of their competitors in order to compare and verify the results in detail. Bug-hunting/fixing sometimes also takes place here, of course. So applying the article's suggestions would have the potential to accelerate scientific collaboration.

btw, I know some people who still work with the data taken at the LEP experiment, which was shut down almost 20 (!) years ago, and they have a hard time combining old detector simulations, Monte Carlos etc. with new data-analysis techniques, for the exact same reasons mentioned in the article. For large-scale experiments it is a serious problem which nowadays gets much more attention than in the LEP era, since the LHC has obvious big-data problems to solve before its next upgrade anyway, including software solutions.


That's exactly the philosophy we follow e.g. in particle physics, and it's a common excuse to dismiss all the guidelines made in the article. However, this kind of validation/falsification is usually done between different research groups (maybe using different but formally equivalent approaches), while people within the same group have to deal with the 10-year-old code base.

I myself had a very bad experience extending the undocumented Fortran 77 code (lots of GOTOs and COMMON blocks) of my supervisor. In the end I decided to rewrite the whole thing, including my new results, instead of somehow embedding my results into the old code, for two reasons: (1) I'm presumably faster rewriting the whole thing including my new research than struggling with the old code, and (2) I simply would not trust the numerical results/phenomenology produced by the old code. After all, I'm wasting two months of my PhD on marrying my own results with known results, which in principle could have been done within one day if the code base allowed for it.

So yes, if it's a one-man show I would not put too much weight on code quality (though unit tests and git can save quite a lot of time during development), but if there is a chance that someone else is going to touch the code in the near future, it will save your colleagues time and improve the overall (scientific) productivity.

PS: quite excited about my first post here


> If it's a one-man show I would not put too much weight on code quality

This makes me a little uneasy, as "I'm not too worried about code quality" can easily translate into "Yes, I know my code is full of undefined behaviour, and I don't care."

> PS: quite excited about my first post here

Welcome to HN! reddit has more cats, Slashdot has more jokes about sharks and laser beams, but somehow we get by.


Are we talking actual undefined behavior or just behavior that's undefined by the language standard?

The latter isn't great practice, but if your environment handles behavior deterministically, and you publish the version of the compiler you're using, it doesn't seem to be a problem for this type of code.


> Are we talking actual undefined behavior or just behavior that's undefined by the language standard?

'Undefined behaviour' is a term of art in C/C++ programming; there's no ambiguity.

> if your environment handles behavior deterministically, and you publish the version of the compiler you're using, it doesn't seem to be a problem for this type of code.

Code should be correct by construction, not correct by coincidence. Results from such code shouldn't be considered publishable. Mathematicians don't get credit for invalid proofs that happen to reach a conclusion which is correct.

Again, this isn't some theoretical quibble. There are plenty of sneaky ways undefined behaviour can manifest and cause trouble. [0][1][2]

In the domain of safety-critical software development in C, extreme measures are taken to ensure the absence of undefined behaviour. If scientists adopt a sloppier attitude toward code quality, they should expect to end up publishing invalid results. Frankly, this isn't news, and I'm surprised the standards seem to be so low.

Also, of all the languages out there, C and C++ are among the most unforgiving of minor bugs, and are a bad choice of language for writing poor-quality code. Ada and Java, for instance, won't give you undefined behaviour for writing int i; int j = i;.

[0] https://devblogs.microsoft.com/oldnewthing/20140627-00/?p=63...

[1] https://blog.regehr.org/archives/213

[2] https://cryptoservices.github.io/fde/2018/11/30/undefined-be...

See also my longer ramble on this topic at https://news.ycombinator.com/item?id=24264376


I think it's poor practice, but undefined behavior shouldn't instantly invalidate results. In fact, this mindset is what keeps people from publishing their code in the first place.

Let the scientists publish UB code, and even the artifacts produced, the executables. Then, if such problems are found in the code by professionals, they can investigate it fully and find if it leads to a tangible flaw that invalidates the research or not.

You would drive yourself mad pointing out places in math proofs where some steps, even seemingly important ones, were skipped. But the papers are not retracted unless such a gap actually holds a flaw that invalidates the rest of the proof.

Let them publish their gross, awful, and even buggy code. Sometimes the bugs don't affect the outcomes.


> undefined behavior shouldn't instantly invalidate results

Granted, it's not a guarantee that the results are wrong, but it's a serious issue with the experiment. I agree it wouldn't generally make sense to retract a publication unless it can be determined that the results are invalid. It should be possible to independently investigate this, if the source-code and input data are published, as they should be.

(It isn't universally true that reproduction of the experiment should be practical given that the source and data are published, as it may be difficult to reproduce supercomputer-powered experiments. iirc, training AlphaGo cost several million dollars of compute time, for instance.)

> this mindset is what keeps people from publishing the code in the first place

As I explained in [0], this attitude makes no sense at all. It has no place in modern science, and it's unfortunate the publication norms haven't caught up.

Scientific publication is meant to enable critical independent review of work, not to shield scientists from criticism from their peers, which is the exact opposite.

> Let the scientists publish UB code, and even the artifacts produced, the executables. Then, if such problems are found in the code by professionals, they can investigate it fully and find if it leads to a tangible flaw that invalidates the research or not.

I'm not sure what to make of 'professionals', but otherwise I agree, go ahead and publish the binaries too, as much as applicable. Could be a valuable addition. (In some cases it might not be possible/practical to publish machine-code binaries, such as when working with GPUs, or Java. These platforms tend to be JIT based, and hostile to dumping and restoring exact binaries.)

I agree with your final two paragraphs.

[0] https://news.ycombinator.com/item?id=24264376


> Code should be correct by construction, not correct by coincidence.

Glad we agree; if you're aware of how your compiler handles these things, you can construct the code to be correct in this way.

It won't be portable at all (even to the next patch version of the compiler), I would never let it pass a code review, but that doesn't sound like an issue that's relevant here.


> if you're aware of how your compiler handles these things, you can construct it to be correct in this way.

I presume we agree but I'll do my usual rant against UB: Deliberately introducing undefined behaviour into your code is playing with fire, and trying to outsmart the compiler is generally a bad idea. Unless the compiler documentation officially commits to a certain behaviour (rollover arithmetic for signed types, say), then you should take steps to avoid undefined behaviour. Otherwise, you're just going with guesswork, and if the compiler generates insane code, the standards documents define it to be your fault.

It might be reasonable to make carefully disciplined and justified exceptions, but that should be done very cautiously. JIT relies on undefined behaviour, for instance, as ultimately you're treating an array as a function pointer.

> It won't be portable at all (even to the next patch version of the compiler)

Right, doing this kind of thing is extremely fragile. Does it ever crop up in real life? I've never had cause to rely on this kind of thing.

It would be possible to use a static assertion to ensure my code only compiles on the desired compiler, preventing unpleasant surprises elsewhere, but I've never seen a situation where it's helpful.

This isn't the same thing as relying on 'ordinary' compiler-specific functionality, such as GCC's fixed-point functionality. Such code will simply refuse to compile on other compilers.

> I would never let it pass a code review, but that doesn't sound like an issue that's relevant here.

Disagree. It should be possible to independently reproduce the experiment. Robust code helps with this. Code shouldn't depend on an exact compiler version, there's no good reason code should.


> After all, I'm wasting 2 months of my PhD for the marriage of my own results with known results which -in principle- could have been done within one day if the code base would allow for it.

Sounds like it is quite good science to do that, because it puts the computation on independent footing.

Otherwise, it could just be that the code you are using has a bug and nobody notices until it is too late.


I see your and MaxBarraclough's concerns. In my case, there exist 5-6 codes which do, at their core, the same thing ours does, and they have all been cross-checked against each other within either theoretical or numerical precision (where possible). That's the spirit that sjburt was referring to, I guess, and which triggered me, because it is only true to a certain extent.

The cross-checking is good scientific practice anyway, not only because of bugs in the code (that's actually a sub-leading problem imho), but because of the degree of difficulty of the problems and the complexity of their solutions (and their reproducibility). In that sense, cross-checking should uncover both scientific "bugs" and programming bugs. The "debugging" is thus partly also done at the community level - at least in our field of research.

However, it is also a matter of efficiency. I - and many others too - need to re-implement not because of bug-hunting/cross-checking but simply because we do not understand the "ugly" code of our colleagues, and instead of taking the risk of breaking existing code we simply write new code, which is extremely inefficient (others may take the risk and then waste months on debugging and reverse-engineering, which is also inefficient). So my point about writing "good code" is not so much about avoiding bugs but about being kind to your colleagues, saving them nerves and time (which they can then spend on actual science) and thus also saving taxpayers' money...

