If you see that an algorithm has issues with imbalanced classes, you could try to address that, or you could frame your problem as sexism, write a paper with that in the headline and then call Wired.
In any case, if you actually read the paper, they examine a single algorithm (Conditional Random Fields) on two datasets (which Wired extrapolates to the entire field), and their own solution is to add constraints forcing the model to preserve the woman:man cooking ratio of the original dataset. While there is no loss of accuracy, there is also no improvement, so it just shifts the errors around. And there is no analysis at all of why a CRF would exhibit these properties in the first place.
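To see why a ratio constraint amounts to "shifting errors around": the paper's actual method applies corpus-level constraints during CRF inference via Lagrangian relaxation, but a much cruder post-hoc sketch captures the same effect. The helper below is hypothetical (toy labels and confidences, not the paper's data): it flips the least-confident predictions until the predicted woman:man ratio matches a target, so the total error count can stay the same while the errors land on different examples.

```python
def constrain_ratio(preds, scores, target_ratio):
    """Post-hoc sketch of a corpus-level ratio constraint.

    preds:  list of "woman"/"man" predicted labels
    scores: model confidence in each prediction (same length)
    Flips the least-confident predictions until the fraction labeled
    "woman" matches target_ratio (e.g. the training-set ratio).
    """
    preds = list(preds)
    n = len(preds)
    target_women = round(target_ratio * n)
    # Visit predictions from least to most confident: those are the
    # cheapest to flip without (in expectation) hurting accuracy much.
    order = sorted(range(n), key=lambda i: scores[i])
    for i in order:
        women = sum(1 for p in preds if p == "woman")
        if women == target_women:
            break
        if women > target_women and preds[i] == "woman":
            preds[i] = "man"    # an over-predicted class loses a member...
        elif women < target_women and preds[i] == "man":
            preds[i] = "woman"  # ...or gains one: the error just moves
    return preds

# Toy usage: predictions are skewed 3:1, target ratio is 1:1.
balanced = constrain_ratio(["woman", "woman", "woman", "man"],
                           [0.9, 0.6, 0.55, 0.8], target_ratio=0.5)
# The least-confident "woman" prediction gets flipped to "man";
# if it happened to be correct, we traded one error for another.
print(balanced)
```

The point of the sketch: the constraint fixes the aggregate statistic, not the individual predictions, which is consistent with the paper reporting no accuracy change in either direction.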
But this is kind of my point: even after you solve the issues that arise from class imbalance (which ML practitioners and researchers are already highly motivated to solve, since fixing them improves average performance), you are still left with a bias that society deems taboo and says must be fixed, and that bias cannot be fixed simply by more accurate ML or more accurate data.
The problem is not that ML reproduces the bias present in the training set; it is that it amplifies it, and by a lot.
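The reproduction-versus-amplification distinction can be made concrete with a toy calculation: compare the gender ratio in the training labels against the ratio in the model's predictions. The numbers below are illustrative, loosely echoing the paper's example where a roughly 2:1 skew in the data becomes a much larger skew in the output:

```python
from collections import Counter

def gender_ratio(labels):
    """Fraction of 'cooking' instances attributed to a woman."""
    counts = Counter(labels)
    return counts["woman"] / (counts["woman"] + counts["man"])

# Toy numbers (not the paper's actual counts): training data is skewed
# ~2:1, but the model's predictions are skewed far beyond that.
train_labels = ["woman"] * 66 + ["man"] * 34
pred_labels  = ["woman"] * 84 + ["man"] * 16

train_bias = gender_ratio(train_labels)   # skew already in the data
pred_bias  = gender_ratio(pred_labels)    # skew in the model's output
amplification = pred_bias - train_bias    # the model overshoots the data

print(f"training bias:  {train_bias:.2f}")
print(f"predicted bias: {pred_bias:.2f}")
print(f"amplification:  {amplification:.2f}")
```

A model that merely reproduced the dataset's bias would show an amplification near zero; the complaint here is that structured models like CRFs push the majority label well past its frequency in the data.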