Seems like it should be really straightforward to determine if this hypothesis is right by measuring bradykinin levels in Covid patients versus non-Covid patients.
One thing that I think strongly suggests that this hypothesis is wrong is that there is no strong relation between ACE-inhibitors and Covid mortality. Indeed, most of the studies that I've seen suggest that ACE-inhibitors have a somewhat protective effect whereas ARBs actually seem to have a minor detrimental effect [1]. So for the article to claim that covid behaves pharmacologically like ACE-inhibitors seems wrong at face value.
My apologies. I missed it earlier. Here is another article that looks to be in good standing that comes to similar conclusions about ACE-I vs ARBs [2].
> One thing that I think strongly suggests that this hypothesis is wrong is that there is no strong relation between ACE-inhibitors and Covid mortality.
Perhaps you can clear up something for me. We've all heard how obesity and hypertension are risk factors for covid morbidity. But I could never get a clarification regarding treated vs. untreated hypertension.
AFAIK, many obese people take ACE inhibitors to treat hypertension. If we divide obese people into three groups: (a) untreated, (b) treated with ACE inhibitors, and (c) treated with other medications, how do their covid morbidity rates compare?
The answer to treated vs untreated hypertension right now is that we simply do not know. What we do know is that treated hypertensive patients don't appear to have significantly worse outcomes.
This article does a great job outlining the current state of the knowledge on the subject of Covid/hypertension as well as some clinical trials that should be posting results early next year [1].
I haven't seen any studies that try and tease apart all of the complex relationships amongst various comorbidities, but I think we have seen pretty conclusively that obesity is a very significant risk factor.
I had this same question early on and I recall reading somewhere that treated hypertension reduced risk. Even if that came from a credible source, though, it may not mean much.
The problem is that the relationship between hypertension, antihypertensives, and Covid is going to be very nuanced and difficult to ascertain without large patient populations to study. One of the very unique aspects of the Covid pandemic is that the NPIs seem to be very effective at damping the spread, so much so that the pool of patients to study keeps moving from region to region every 60 days or so.
By the time you design a study, recruit a pool and wait for some of them to get covid, not enough get it for the study to have enough statistical power.
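To put rough numbers on that, here is a back-of-the-envelope sketch; the 1% vs 2% event rates below are invented purely to show the scale of recruitment needed, not taken from any study:

    # Rough per-arm sample size for detecting a difference between two
    # proportions (normal approximation). Event rates are illustrative only.
    from scipy.stats import norm

    p1, p2 = 0.02, 0.01        # assumed event rates in the two arms (made up)
    alpha, power = 0.05, 0.80  # conventional significance level and power

    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    n = (z_a + z_b) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2)) / (p1 - p2) ** 2
    print(round(n))            # roughly 2,300 patients per arm under these assumptions

And that is per arm, assuming everyone enrolled is actually exposed; once local spread dies down, the effective event rates drop and the required numbers balloon.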
> ACE2 counters the activity of the related angiotensin-converting enzyme (ACE) by reducing the amount of angiotensin-II and increasing Ang(1-7)
The cell/tissue tropism of SARS-CoV-2 must logically have some effect on the function of the renin–angiotensin system (RAS). What is frustrating is that these questions remain unanswered.
> Interestingly, Jacobson’s team also suggests vitamin D as a potentially useful Covid-19 drug. The vitamin is involved in the RAS system and could prove helpful by reducing levels of another compound, known as REN. Again, this could stop potentially deadly bradykinin storms from forming. The researchers note that vitamin D has already been shown to help those with Covid-19. The vitamin is readily available over the counter, and around 20% of the population is deficient. If indeed the vitamin proves effective at reducing the severity of bradykinin storms, it could be an easy, relatively safe way to reduce the severity of the virus.
"Put simply, medics found that severely ill flu patients nursed outdoors recovered better than those treated indoors. A combination of fresh air and sunlight seems to have prevented deaths among patients; and infections among medical staff.[1] There is scientific support for this. Research shows that outdoor air is a natural disinfectant. Fresh air can kill the flu virus and other harmful germs. Equally, sunlight is germicidal and there is now evidence it can kill the flu virus."
That air is a natural disinfectant is super interesting.
I read that the link between vitamin D and the observed benefits for those who have vitamin D is a correlation, meaning that taking vitamin D supplements might not be as helpful as getting sunlight (a natural way to get vitamin D).
What I found interesting about this article is that it isn't based on a correlation of outcomes and vitamin D like most other results are. It's actually looking at potential mechanisms and vitamin D is coming up as a candidate.
I would guess "air as disinfectant" is primarily because of its oxygen content. O_2 is actually a pretty nasty molecule like that. Oxidation is typically a fairly tough reaction to reverse or prevent.
Maybe it's not even a "disinfectant". It's just that you aren't re-breathing stale air with droplets in it, when outside. Assuming, of course, you aren't wearing a mask. :-)
Aytu BioScience is working on an ultraviolet device to kill viruses in vivo. Initial results look promising but it's too early to say whether that would be an effective therapy.
Vitamin D may have modulatory effects on the biochemical pathways linked in the original article. Most pharmaceutical products we study aren't going to directly kill bacteria or viruses; usually they have a particular physical effect on a specific molecule that then leads, through various cellular pathways, to the death or suppression of the pathogen. I highly doubt vitamin D directly kills anything, but it could modulate the symptoms or pathophysiology of COVID-19 the disease, as opposed to SARS-CoV-2 the virus itself.
If "it" means UV light, yes there are UV sterilizers aplenty today. If "it" means outdoors/fresh air, there are a lot of components to that and it's not like "fresh air" enters "in vivo" per se.
Ideally you'd like to understand the mechanism so that you can isolate and optimize the effect, as well as predict possible interactions/side effects.
Ideally. In reality it rarely happens. And it's certainly not needed: the entire basis upon which evidence-based medicine is founded is that an effective intervention should be statistically measurable in its effect regardless of our knowledge of the underlying mechanics.
See Dr. Gerry Schwalfenberg's letter on the "Vitamin D hammer" for treating respiratory viruses. While not a controlled study it does point toward an avenue for further research.
They found that some genes related to ACE, in comparison to genes related to ACE2, are more 'expressed' in Covid patients than normal, and they conclude that this must have resulted in too much bradykinin. Hmm, that strikes me as kind of roundabout. Why couldn't they just measure bradykinin levels directly? Is that too hard?
"Here, we perform a new analysis on gene expression data from cells in bronchoalveolar lavage fluid (BALF) from COVID-19 patients that were used to sequence the virus. Comparison with BALF from controls identifies a critical imbalance in RAS represented by decreased expression of ACE in combination with increases in ACE2, renin, angiotensin, key RAS receptors, kinogen and many kallikrein enzymes that activate it, and both bradykinin receptors. This very atypical pattern of the RAS is predicted to elevate bradykinin levels in multiple tissues and systems that will likely cause increases in vascular dilation, vascular permeability and hypotension."
> They analyzed the human genome and SARS-CoV-2 genome to find that some genes related to ACE, in comparison to genes related to ACE2, are more 'expressed' by the Covid-19 virus than normal, and they observed known biochemical pathways that are hypothesized to lead to too much bradykinin.
FTFY. They didn't measure patients. They modeled genomic interactions to make some predictions about biochemical effects on patients. Then they noted that some of those predicted effects correlate with symptoms of Covid patients. They went further and shotgunned a list of treatments which are known to affect the same biochemical processes. The farther along the path of inference, the weaker the conclusions get, but it sounds like a promising arrow for research to me.
> Why couldn't they just measure bradykinin levels directly? Is that too hard?
Don't need nearly as much permission or human resources to run computer simulations on offline data as you do to take measurements of patients in the hospital.
> Why couldn't they just measure bradykinin levels directly? Is that too hard?
It is. To measure gene expression, you isolate total mRNA and sequence it. This tells you the expression of all genes simultaneously. The protocol is fairly standard, cheap, and quick. That doesn't tell you anything about bradykinin, though, because there is no mRNA that codes for it.
In contrast, no such protocol exists for proteins. Sequencing a single protein is comparatively difficult, and no high throughput device exists that sequences lots of proteins, let alone quantifies their abundance. The traditional lab methods like PAGE gels are slow and labor intensive.
This comment on quantitative protein assays is only partly correct. It is definitely harder than quantitative assays of messenger RNA (aka “gene expression”) and not as comprehensive. But there are now hundreds of quantitative proteomic studies that survey 5,000 or more proteins (and their peptide fragments) in single samples. Search PubMed for the author “Aebersold R”.
It's annoying that this is not a 'new' article/theory. It is an article talking about some research and analysis from TWO MONTHS ago. I almost want to request a (July 2020) marking in the title, as we do with old link posts.
"Bradykinin is a potent part of the vasopressor system that induces hypotension and vasodilation and is degraded by ACE and enhanced by the angiotensin1-9 produced by ACE2. ... This very atypical pattern of the RAS is predicted to elevate bradykinin levels in multiple tissues and systems that will likely cause increases in vascular dilation, vascular permeability and hypotension."
This is a really interesting theory, and it explains to some degree why those with darker complexions (lower vitamin D levels) and obesity (increased rates of hypertension) have higher mortality rates. Would it explain why younger people are much less affected, though?
Am I the only one bothered by the use of “theory” when this is a hypothesis and, in particular, the back and forth flipping between the two terms in the article?
It's perfectly reasonable that both terms are used. The article is talking about the bradykinin hypothesis as the basis of a theory of COVID-19 function.
It seems to me like even scientists use the terms not quite interchangeably, but on a spectrum. String theory is still "theoretical physics", not "hypothetical physics", even though it will likely never be tested in our lifetime.
That's because the whole internet idea of "a theory is a well-tested hypothesis" is silly and wrong. A hypothesis is a specific supposition or question about the way things work which if true has some level of explanatory power in a field of study. It can be confirmed or unconfirmed. It's still a hypothesis. An answered question is still a question.
A theory is an explanatory framework for a body of knowledge. Unlike a hypothesis, it's not inherently a question or a guess. That doesn't mean it's "true." It also can be unconfirmed (as you say, string theory) or even demonstrably false (phlogiston theory, Ptolemaic theory) and still be a theory.
Obviously there's considerable overlap between the two concepts and as you say they are sometimes used almost interchangeably. Colloquially, "I'm testing my hypothesis that orally ingesting booze provides protection against infection, which if confirmed will be a key part of a theory of booze immunology" gets collapsed into "I'm testing my theory of booze immunology." Big deal. It really only matters because people let themselves get bent out of shape about the whole "evolution is just a theory" thing.
And since I'm ranting already, evolution isn't "just a theory" because evolution itself isn't a "theory," evolution is the natural phenomenon that is being theorized about.
Right. I'll only add that it's not really an "internet idea". It's a highly idealized, cartoon version of science that has been part of my education since long before I was on the internet.
I could find only the article below on IBM's news section (they built this supercomputer). Speaking about the results of the two-day analysis, Jeremy Smith, Governor’s Chair at the University of Tennessee, director of the UT/ORNL Center for Molecular Biophysics, and principal researcher in the study: “Our results don’t mean that we have found a cure or treatment for COVID-19. We are very hopeful, though, that our computational findings will both inform future studies and provide a framework that experimentalists will use to further investigate these compounds. Only then will we know whether any of them exhibit the characteristics needed to mitigate this virus.”
We don’t know the dosage or protocol for stopping the bradykinin storm, or even if this hypothesis is confirmed through more tests ... but it seems sensible to me to make sure one has a healthy level of Vitamin D, and that is actionable for a lot of people.
This works for both shiitake and button (portobello) mushrooms.
This trick came from Paul Stamets, one of the world’s foremost mycologists. He is based out in Cascadia. You may have to track down exactly what “2 days” means... I wouldn’t be surprised if he went looking for a way to maintain his vitamin D health.
You only need 10g of this four times a week, according to this article. While it is summer time in Seattle, you can prepare a bunch and then dry them, so they last until next spring.
I recommend a bottle of chewables, placed somewhere you're likely to see it every day. It makes it easier to get others to take their dose as well, if you take the water glass out of the equation.
I can't really see why you would need a supercomputer to do the analysis they did (it shows SC centers are getting desperate for users; I wouldn't even have been allowed to run this code on a supercomputer when I was an academic, because it didn't need interconnect for strong scaling).
The issue is that, from what I can tell, the authors just used R to analyze some data, with no explicit parallelism. You would do a better job just renting time on AWS, saving money for everybody.
In my experience with research computing, if you are able to keep a computer doing active work more than 60% of the time, it will be cheaper to purchase and run that computer yourself than renting it from AWS. That's the case even with commodity machines with only 10G Ethernet interconnect. $15k for a machine is only $0.34/hour over 5 years. That doesn't buy much of an AWS machine. (Yes, cooling, real estate and power are all overhead on that, but researchers often don't pay those costs directly, they are covered by the university with other monies.)
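For anyone who wants to plug in their own numbers, the amortization math is trivial; a quick sketch (the 60% utilization figure is just the threshold mentioned above, and the price and lifetime are the same figures from this comment):

    # Back-of-the-envelope amortization of a purchased machine.
    # $15k price and 5-year lifetime are the figures from the comment above;
    # utilization is whatever fraction of hours your machine does useful work.
    purchase_price = 15_000          # dollars
    lifetime_hours = 5 * 365 * 24    # five years, running continuously
    utilization = 0.60

    cost_per_hour = purchase_price / lifetime_hours
    cost_per_useful_hour = cost_per_hour / utilization
    print(f"${cost_per_hour:.2f}/hr raw, ${cost_per_useful_hour:.2f}/hr of useful work")
    # -> $0.34/hr raw, $0.57 per hour of actual useful work at 60% utilization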
You're completely ignoring the other valuable aspects of being in a cloud: you're close to huge amounts of high throughput storage (blob and DB), and can increase/decrease the size of your fleet trivially. These are critical to nearly all modern scientific workflows (moreso than the raw compute, IMHO).
As for the cost structure for research computing, the argument that the costs are externalized isn't a good one- that overhead that pays for the facility, and the networking, comes out of your grant money, and using grad student time to admin your cluster often just causes your grad students to leave for FAAMG.
> You're completely ignoring the other valuable aspects of being in a cloud: you're close to huge amounts of high throughput storage (blob and DB), and can increase/decrease the size of your fleet trivially. These are critical to nearly all modern scientific workflows (moreso than the raw compute, IMHO).
That has not been my experience. There are lots of scientific workflows that only need 10s of TB at most, yet can still consume lots of cycles.
> As for the cost structure for research computing, the argument that the costs are externalized isn't a good one- that overhead that pays for the facility, and the networking, comes out of your grant money, and using grad student time to admin your cluster often just causes your grad students to leave for FAAMG.
At the universities I've worked at, equipment (large purchases) is exempt from overhead, or results in a lower overhead charge. (Researchers balk at paying a ~50% overhead rate on a $1 million instrument.) Using grad student time to admin your cluster is dumb, but I'm more talking about users who need single-digit numbers of computers. If you need real HPC, you're in the world of queues, national and regional supercomputers, etc.
I read the underlying article (https://elifesciences.org/articles/59177) and was unable to find any evidence of that. In fact the paper doesn't mention anything about Summit or give details on the computations :(
There certainly weren't any "heavy biochemical calculations"; this work is entirely comparative genomics, so just operating on DNA strings.
> In particular, RNA sequencing (RNA-seq) technology, which provides a comprehensive profile of a transcriptome, is increasingly replacing conventional expression microarrays. Primary data processing in RNA-seq (as well as in other massive sequencing experiments, including genome resequencing) involves mapping reads onto a reference genome. This step constitutes a computationally expensive process in which, in addition, sensitivity is a serious concern
What good is a paper if their methods are a single sentence? Ugh.
Read mapping is an embarrassingly parallel computation, again not something you would need or want a supercomputer for. You mainly need disk IO to/from the source reads and the mapping table you produce.
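A sketch of what I mean, using Python's multiprocessing as a stand-in for however you would actually farm this out (the align function here is a placeholder, not a real aligner):

    # Read mapping parallelizes trivially: split the reads into chunks and map
    # each chunk independently; no communication between workers is needed.
    from multiprocessing import Pool

    def align(read):
        # placeholder for a call to a real aligner (bwa, STAR, etc.)
        return (read, "unmapped")

    def map_chunk(chunk):
        return [align(read) for read in chunk]

    def parallel_map(reads, n_workers=32, chunk_size=100_000):
        chunks = [reads[i:i + chunk_size] for i in range(0, len(reads), chunk_size)]
        with Pool(n_workers) as pool:
            results = pool.map(map_chunk, chunks)
        return [hit for chunk in results for hit in chunk]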
If it's simple alignment like BLAST, they don't need the interconnect. But supercomputers provide all the other resources needed: fast processors, connected nodes, accelerators, massive I/O bandwidth. Why build a cluster without a fast interconnect to complement those features? That's just turning down research customers. And 15% on top of the other hardware is nothing. The main costs are the power, custom software, and support staff. And not all bioinformatics is ops on strings with minimal message passing.
For these guys, it's likely their best and only option. They probably weren't given the money to build a cluster optimized for their needs or to maintain a cloud instance. Why? It's Oak Ridge; their main priority is HPC physics. It's hard to argue when you have access to such an HPC center. And HPC sites need all the customers they can get, lest their clusters get shoved into the cloud. It's a real fear. They'd end up with hidden costs, data lock-in, and poor interconnects. To help pay for those massive peak simulations, traditional HPC needs to fill up that last 10-15%, and bioinformatics needs most of what they offer. Perhaps all that hard-won knowledge will rub off on the burgeoning field too. :)
A vision of bio-oriented HPC is IU's clusters. Though they have a shiny new Cray Shasta with Ampere and Slingshot, several of their other clusters are 10-gig with high-memory nodes. All connected to the same storage too. The hospital is the main customer and dictates their designs.
Ditto on the paper though. It's what I disliked about Bioinformatics. All the glory to the researchers designing the experiments and they can't even bother to mention what software they used.
The description of RNA-seq analysis spans nearly the entire paragraph, by my interpretation and limited understanding of the methods.
> RNA-Seq analysis was performed using the latest version of the human transcriptome (GRCh38_latest_rna.fna, 160,062 transcripts to which we appended the SARS-CoV-2 reference genome, MN908947). Mapping parameters were set with a mismatch cost of two, insertion and deletion cost of three, and both length and similarity fraction were set to 0.985. TPMs were generated for all 160,063 transcripts for the nine COVID-19 samples and the 40 controls (Supplementary file 2). The resulting transcript mappings for genes of interest were manually inspected to account for any expression artifacts, such as reads mapping solely to repetitive elements such as the Alu transposable element or all reads mapping to a UTR or pseudogene therein. Transcripts whose counts came solely from (or were dominated by) reads at repetitive elements were removed from the analysis. For the controls cases we ran an outlier analysis using the prcomp function in the R package factoextra. Input data were TPM for transcripts that averaged greater than one across all samples (30,102, Supplementary file 2).
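For what it's worth, the filtering and outlier step they describe is only a few lines. Here's an illustrative re-expression (the paper used R's prcomp/factoextra; the file name and column layout below are assumptions, not taken from their supplementary data):

    # Keep transcripts with mean TPM > 1 across samples, then PCA for outliers.
    # File name and layout are assumptions for illustration only.
    import pandas as pd
    from sklearn.decomposition import PCA

    tpm = pd.read_csv("tpm_matrix.csv", index_col=0)  # rows: transcripts, cols: samples

    kept = tpm[tpm.mean(axis=1) > 1]                  # the "average TPM > 1" filter

    pcs = PCA(n_components=2).fit_transform(kept.T)   # one row per sample
    for sample, (pc1, pc2) in zip(kept.columns, pcs):
        print(sample, round(pc1, 2), round(pc2, 2))   # eyeball for outlier samples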
Depends how much of the "supercomputer" the calculation used. If it reserved a single CPU core on a single node, then that'd be cheap, allowing the rest of the system to get on with something else. There's no reason AWS would be cheaper.
The supercomputer in this article is extremely specialized hardware designed to maximize peak performance across all nodes. Using just a single CPU core to run a bioinformatics study would be like taking an army tank to drop the kids off at soccer practice.
The supercomputer nodes cost far more, per node, than AWS machines - 15% or more of the budget was spent on interconnect. Supercomputers don't partition CPUs like that, because interference causes performance degradation. Unfortunately, CPU is not perfectly compressible - in particular, the cache is shared by all processes that run on the CPU, so if you run another job on the same node, you will see slower performance due to higher cache replacement (this is measured using Cycles Per Instruction).
>I can't really see why you would need a supercomputer to do the analysis they did
The article is fluff.
98% of all bioinformatics is done on "supercomputers" or "high performance computing environments"; saying the researchers used supercomputers to analyze the expression data is like saying someone used a shovel to dig a hole.
The article doesn't simply say they used a supercomputer. It says they used Oak Ridge's Summit supercomputer, the second fastest in the world. Unless 98% of all bioinformatics is done on a top-2 supercomputer, your point fails.
No. Although our phones are extraordinary, they are not supercomputers. Supercomputers, more or less by legacy historical definitions, are collections of computing resources connected by a fast network with the goal of scaling to solve problems that could not be solved with conventional systems. Phones aren't really designed to solve that kind of problem.
That said, many problems that previously would have required a supercomputer, can now be solved on phones.
Maybe. I found a study [1] that suggests nicotine helps in the vascular metabolism of bradykinin. I'm no researcher and I don't understand a lot of biology, but it could be a hint.
Vitamin D is lipid-soluble - your body will not flush any excess, and build-up can have negative health consequences. Probably don't start taking vitamin D supplements without getting your current vitamin D levels tested first.
The anthropomorphization of computers (especially powerful ones) is pretty annoying. A _person used a supercomputer_ to analyze Covid-19, and I’m guessing the theory didn’t just ‘emerge’ from the computer.
I remember a professor of mine (I forgot in which class) saying that humans generally make up 3 kinds of explanations about a system's behavior:
1. When a system is very simple, we explain its actions in terms of its properties and external forces acting on it.
2. When it's a medium complexity system, we tend to explain its actions in terms of its design, putting ourselves in the designer's shoes.
3. When it's a complex enough goal-seeking system, we begin to empathise with the system itself, thinking why "it chose" a course of action.
I remember how impressed I was with this classification and how much sense it made in terms of how we're able to understand the world and predict what will happen next.
From this angle, much of modern computer software is clearly in category 3, and it just makes things easier for us to think of it as having a mind of its own.
For category three I feel that goal-seeking (or the appearance of such) is sufficient for the anthropomorphism to kick in. The Braitenberg vehicles thought experiment is a good example of this.
Quoting from wikipedia "For the simplest vehicles, the motion of the vehicle is directly controlled by some sensors (for example photo cells). Yet the resulting behaviour may appear complex or even intelligent."
I routinely anthropomorphize PID loops when tuning them.
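For anyone who hasn't tuned one, a PID controller really is just a few lines, which makes the "it's fighting to get back to the setpoint" framing come naturally. A minimal textbook sketch; the gains here are arbitrary illustration values, not recommendations:

    # Minimal PID controller: the "personality" people project onto it is
    # entirely contained in these three terms. Gains are arbitrary examples.
    class PID:
        def __init__(self, kp=1.0, ki=0.1, kd=0.05, setpoint=0.0):
            self.kp, self.ki, self.kd = kp, ki, kd
            self.setpoint = setpoint
            self.integral = 0.0
            self.prev_error = 0.0

        def update(self, measurement, dt):
            error = self.setpoint - measurement          # how far "it" is from what "it wants"
            self.integral += error * dt                  # "it remembers" accumulated error
            derivative = (error - self.prev_error) / dt  # "it anticipates" where it's heading
            self.prev_error = error
            return self.kp * error + self.ki * self.integral + self.kd * derivative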
One of the best nuclear astrophysicists I know thinks about stars as entities that want to stay alive -- "I'm running out of hydrogen, what can I burn next?" That approach yields the right phenomenology almost all the time.
Well, yes and no. We say things like "the star wants to go nova" when we mean "stars at this stage in their development tend to go nova" - but the star doesn't want, any more than it dreams or fears to go nova. Anthropomorphizing might help us generalize about things made by people (who do want, dream and fear) though I don't think that's always the case, but I would say it isn't always useful for non-human (and especially non-living) systems.
But if you observe the behavior of a bacterium, operating under similar principles but seeking instead a salt or pH value, you would do well to describe it similarly. The bacterium is seeking a certain condition. It can not think, but it is a machine that acts with purpose and must be modeled accordingly.
I'm curious at what point an organism is considered to be capable of thought. From a mechanistic standpoint, it seems like the goal-seeking behavior of a simple organism could be modeled as gradient ascent toward a local optimum. But even complex organisms are still seeking certain conditions to maximize their ability to survive and reproduce. Is the existence of a brain enough for a creature to be considered capable of thought? How complex does an organism have to be before we no longer think of it as being a biological "machine"?
Or do we simply fall into the trap the GP described, where once something is complex enough that we don't understand what is going on from a purely mechanical perspective we consider it close enough to human to empathize with it?
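To make the gradient-ascent framing above concrete, here's a toy version; the "nutrient" field and step size are invented purely for illustration:

    # Chemotaxis-like goal seeking as gradient ascent on an invented
    # "nutrient" field with a single peak at (3, 4).
    def nutrient(x, y):
        return -((x - 3) ** 2 + (y - 4) ** 2)

    def climb(x, y, step=0.1, iters=100, eps=1e-3):
        for _ in range(iters):
            # finite-difference gradient: "which direction is better?"
            gx = (nutrient(x + eps, y) - nutrient(x - eps, y)) / (2 * eps)
            gy = (nutrient(x, y + eps) - nutrient(x, y - eps)) / (2 * eps)
            x, y = x + step * gx, y + step * gy
        return x, y

    print(climb(0.0, 0.0))   # ends up near the peak; it "wants" more nutrient

Nothing in there "wants" anything, yet describing it that way is the most compact summary of its behavior.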
A much simpler question can be answered: is it possible to be cruel to an animal? The accepted answer revolves around whether or not you can influence the behavior of the animal (or machine). Crabs naturally hide under rocks. If you shock them when they hide under the rocks, they eventually stop hiding under the rocks. Meanwhile a bacterium, by itself, will never learn to associate a stimulus with a condition.
That may capture the capability for thought. There are higher levels past it, like learning to model the behavior of other organisms, and those models eventually turning into a theory of self.
Trying to follow that wiki article. Is the point that these vehicles appear to have goal-seeking behavior, but that is probably a projection on the part of the observer - and really its behavior is simply an emergent result of the incentives/effects programmed between sensor and effector?
For the discussion of your category 3 you might enjoy Dennett’s influential book, The Intentional Stance, which discusses how people reason from models like “the thermostat tries to keep the temperature between X and Y”.
"However, in the absence of detailed knowledge of the physical laws that govern the behavior of a physical system, the intentional idiom is a useful stance for predicting a system’s behavior."
The human brain has certain data structures which it naturally parses and stores well. "Character with motivations taking actions" is one of them.
When you try to describe a complex or subtle thing concisely, you might find it hard. Even if the system is neither animal nor human, you too might notice yourself reaching for "character with motivations" to describe it.
"It is probably more illuminating to go a little bit further back, to the Middle Ages. One of its characteristics was that "reasoning by analogy" was rampant; another characteristic was almost total intellectual stagnation, and we now see why the two go together. A reason for mentioning this is to point out that, by developing a keen ear for unwarranted analogies, one can detect a lot of medieval thinking today."
"When we returned from the interview, some more legal professionals had arrived and there was a lively discussion going on. For me the exposure was a cultural shock, instructive, but also rather disorienting. Of course I knew that lawyers are not scientists, yet the atmosphere of a trade school took me by surprise. Of course I knew that lawyers mainly deal with national law, yet I was unprepared for the prevailing parochialism. (Now I come to think of it, the system of common law, based —as it is— on custom and precedent, could very well strengthen this phenomenon.) but the most disorienting thing was that I found myself suddenly submerged in a verbal tradition that was totally foreign to me! They were on the average very verbose —some even repetitive—, they had a tendency to "reason" by analogy and more than once I felt that speakers cared more about the potential influence of their words than about what they actually said. (Are these common professional deformations of the trial lawyer?) I spoke for ten minutes, that is, I tried to do so: after several hours of exposure I no longer knew how to address this crowd."
Theories are attempts to explain the mechanism of something based on the observed data. Given data on patient symptoms and known drugs (ACE inhibitors in this case) and their effects, a computer could easily produce a theory that the disease acted like ACE inhibitors. It'd still take a human to write the program to generate these theories, but a computer could do it.
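As a toy illustration of what that program might look like, you could score how closely the disease's observed effects overlap each drug's known effects; the feature sets below are invented placeholders, not real pharmacology:

    # Score the overlap between a disease's observed effects and each drug's
    # known effects. The sets are invented placeholders, not real pharmacology.
    def jaccard(a, b):
        return len(a & b) / len(a | b)

    covid_effects = {"hypotension", "vascular_permeability", "dry_cough"}
    drug_effects = {
        "ACE_inhibitor": {"hypotension", "vascular_permeability", "dry_cough"},
        "beta_blocker": {"bradycardia", "hypotension"},
    }

    for drug, effects in drug_effects.items():
        print(drug, round(jaccard(covid_effects, effects), 2))
    # the highest-overlap drug becomes the candidate "the disease acts like X" hypothesis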
I think it is a question of semantics regarding the words generate and theory. Computers can't produce theories because they generate and analyze data, but can't process ideas. Unless the computer is conscious, it can not generate the theory.
> Unless the computer is conscious, it can not generate the theory.
Why would you say that consciousness is necessary for theory generation? It isn't for arithmetic, equation solving, natural language processing or image identification, etc.
>> they generate and analyze data, but can't process ideas
>But what if the data represents ideas?
Then the computer would still be generating and analyzing data, not processing ideas.
>> Unless the computer is conscious, it can not generate the theory.
>Why would you say that consciousness is necessary for theory generation? It isn't for arithmetic, equation solving, natural language processing or image identification, etc.
I think that the conscious analyst/observer is an intrinsic part of theory discovery, in the same way that a computer can not understand Chinese[1].
If the conscious observer is not necessary for a theory to exist, why is the computer necessary either? Certainly the phenomenon and data exist without it?
> I think that the conscious analyst/observer is an intrinsic part of theory discovery, in the same way that a computer can not understand Chinese
Arithmetic was deliberately mentioned, you might as well say "Of course a calculator app on your phone isn't _really_ doing arithmetic, the conscious analyst/observer is an intrinsic part of discovering the correct answer, in the same way that a computer can not understand compound interest".
There is a sense in which you are correct, but it is a very uninteresting one. Practical applications of computation follow from ignoring this semantic debate.
>There is a sense in which you are correct, but it is a very uninteresting one. Practical applications of computation follow from ignoring this semantic debate.
I think the fact that a computer can execute a program to compound interest isn't a particularly novel or interesting idea to me.
Going back to the original article, I think it was an unnecessary and incorrect anthropomorphization to write that a computer discovered a theory of disease. Why isn't my lazy laptop curing diseases?
I think there are a lot of interesting ideas in this semantic area. Can a computer compose all possible melodies and release them into the public domain [1]? If I write a script that formulates and posts every combination of "x variable cures cancer", did the computer or I discover a theory? If not, what are the minimum requirements?
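The script in that thought experiment is almost embarrassingly short, which is sort of the point:

    # The trivial "hypothesis generator" from the thought experiment above.
    # It enumerates claims; calling any of this "discovery" seems like a stretch.
    variables = ["vitamin D", "green tea", "intermittent fasting", "copper bracelets"]
    for v in variables:
        print(f"{v} cures cancer")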
> A theory has to make a falsifiable prediction. A correlation is not a falsifiable prediction
Yes, you're correct. However, identifying the correlations is a necessary precondition to making a theory about them. And the automated analysis can help with that step.
I wonder if this only says something about the author or about us.
You might phrase a title that way if you believe that we trust computers more than we trust scientists. Is that true? I don't think so. But if it is, how horrifying.
This is really common for big companies like Google: any mistake is attributed to the algorithm, as if it had programmed itself. Of course, the money goes to the corporation, not to the algorithm.
I did not interpret "emerge" as emerging from the computer, any more than in something like "LIGO reports very unusual data, and a new theory of physics has emerged".
I was wondering about that strange way to put it as well - was there a particular reason the article didn't want to mention the team behind the theory?
Also, it prevents one from having to get into too much detail: was it a group of persons? were other tools used? What does 'analyse' mean in this context?
Computer scientists this, researchers that. Come now. This humanization of silicon is annoying. Let's give credit where credit is due: electrons did the calculation, while humans stood and watched.
[1] https://www.nejm.org/doi/full/10.1056/NEJMoa2007621