I agree with you, but also think SCOTUS and similar picks need to require a supermajority vote. That is, I don't have a problem with lifetime appointments to SCOTUS, but I do think if that's the case we need to base those appointments on some kind of trans-partisan consensus.
Tractors are behemoths of modern machinery. I have relatives in the tractor industry, as well as relatives who are farming, and this has been coming for a long time.
Tractors are ridiculously expensive. I was talking with a relative who sells them, and some of them cost more than very nice homes in our city (and we live in a large metro area, where a tractor can easily cost more than a large, attractive home).
These things aren't just tilling dirt like you might have seen in the '20s, or like you might do with a garden in your backyard. There are all sorts of modern planting and harvesting methods, and tractors are the engine that drives a lot of this (very large) machinery. The technology involved can be complex, and it's often tied to aerial surveillance, soil monitoring, etc.
The intellectual property problems associated with Deere and other manufacturers go beyond DRM per se. Over the last couple of decades, Deere has tried to force smaller sellers to abandon other manufacturers. I.e., "as of X date, if you continue to carry equipment from other manufacturers, we will no longer supply you with Deere equipment." This is extraordinarily anticompetitive when you consider that these pieces of equipment are extremely expensive and that dealers are geographically very sparse. It's hard on the farmers, because if a dealer does become Deere-exclusive they might have to travel far to find alternatives, and it's hard on the dealers, because they're forced to put all their eggs in one basket even when buyers might want (or need) choice.
I've actually seen families split apart in cases where one part of the family is on the dealer or farmer side and the other is on the Deere side.
Deere really has become anticompetitive in many ways, not just with DRM. If there ever was a case for antitrust enforcement, in my opinion, it would be Deere.
When I first read this paper I thought it was thought-provoking and captured the tension being referenced pretty well.
Over time, I've come to see it as pretty dated and misleading.
The problem is that the methods of both "cultures" are pretty black box, and it's a matter of which black you want to dress your box in. Actually, it's all black boxes anyway, all the way down, epistemological matryoshki.
The real tension is between relatively more parametric approaches and relatively nonparametric approaches, and how much you want to assume about your data. That, in turn, reduces to a bias-variance tradeoff. More parametric approaches produce less variance but more bias; less parametric approaches produce more variance but less bias. The nature of the problem might push things in one or another direction; e.g., in some fields you know a lot a priori, so just slapping a huge predictive net on x and y makes no sense, while in other fields you know almost nothing, so it makes a lot of sense.
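To make the tradeoff concrete, here's a toy sketch (all names and parameters are mine, not from any particular paper): fit a parametric model (a straight line) and a nonparametric one (3-nearest-neighbor averaging) to repeated noisy samples of a sine curve, then decompose each model's error into bias-squared and variance.

```python
import numpy as np

rng = np.random.default_rng(0)

def true_fn(x):
    return np.sin(2 * np.pi * x)

def fit_predict_linear(x_train, y_train, x_test):
    # Parametric: least-squares line y = a*x + b (high bias on a sine, low variance).
    a, b = np.polyfit(x_train, y_train, 1)
    return a * x_test + b

def fit_predict_knn(x_train, y_train, x_test, k=3):
    # Nonparametric: average the k nearest training points (low bias, high variance).
    preds = []
    for xt in x_test:
        idx = np.argsort(np.abs(x_train - xt))[:k]
        preds.append(y_train[idx].mean())
    return np.array(preds)

# Resample training sets many times, collect predictions at fixed test points,
# then average: bias^2 = (mean prediction - truth)^2, variance = spread across fits.
x_test = np.linspace(0, 1, 50)
n_reps, n_train, noise = 200, 30, 0.3
preds = {"linear": [], "knn": []}
for _ in range(n_reps):
    x_train = rng.uniform(0, 1, n_train)
    y_train = true_fn(x_train) + rng.normal(0, noise, n_train)
    preds["linear"].append(fit_predict_linear(x_train, y_train, x_test))
    preds["knn"].append(fit_predict_knn(x_train, y_train, x_test))

for name, p in preds.items():
    p = np.array(p)
    bias2 = ((p.mean(axis=0) - true_fn(x_test)) ** 2).mean()
    variance = p.var(axis=0).mean()
    print(f"{name:6s}  bias^2={bias2:.3f}  variance={variance:.3f}")
```

The line can't bend to follow the sine (big bias) but barely moves between resamples (small variance); the k-NN fit tracks the curve (small bias) but jumps around with each new sample (big variance).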
Another tension being conflated a bit is between prediction and measurement (supervised and unsupervised classification, forward and inverse inference, etc.). Much of what is being hyped now is essentially prediction, but a huge class of problems exist that don't really fall in this category nicely.
I disagree that computational statistics was being neglected in statistics. What I have seen is that one family of methods (neural network approaches) got new life breathed into it and became extraordinarily successful in a very specific but important class of scenarios. Subsequently, the "AI/ML" label got expanded to include just about any relatively nonparametric, computational statistical method. Maybe computational multivariate predictive discrimination was neglected?
A lot of what AI/ML is starting to bump up against are problems that statistics and other quantitative fields have wrestled with for decades. How generalizable are the conclusions based on these giant datasets to other data? What do you do when you have a massive model fit to an idiosyncratic set of inputs? How do you determine whether your model is fitting to meaningful features? What is the meaning of those features? Why this model and not another one? There are really strong answers to many of these types of questions, and they're often found in traditional areas of statistics.
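The generalization question above has a classic toy illustration (the setup here is my own, purely for demonstration): fit a flexible model to a target that is pure noise, and it will look great on the sample it memorized while telling you nothing about a fresh draw from the same process.

```python
import numpy as np

rng = np.random.default_rng(1)

# Pure-noise target: there is nothing real to learn, so any apparent fit
# is the model memorizing this particular sample.
n = 30
x = np.linspace(-1, 1, n)
y = rng.normal(0, 1, n)

# A fresh draw from the same process -- the "other data" any conclusion
# would need to generalize to.
y_new = rng.normal(0, 1, n)

for degree in (1, 15):
    # Polynomial.fit uses a scaled basis, which is better conditioned
    # than raw polyfit at higher degrees.
    p = np.polynomial.Polynomial.fit(x, y, degree)
    fitted = p(x)
    train_mse = ((fitted - y) ** 2).mean()
    new_mse = ((fitted - y_new) ** 2).mean()
    print(f"degree {degree:2d}: train MSE {train_mse:.2f}, new-sample MSE {new_mse:.2f}")
```

The degree-15 fit drives training error way down, but its error on the fresh sample is worse than what it reported in-sample, because the "features" it fit were idiosyncrasies of one draw of noise.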
Anyway, I see this paper as drawing an artificial dichotomy around issues that have existed for a long, long time, and I see that artificial dichotomy as masking more fundamental issues that face anyone fitting any quantitative model to data. It's a misleading and maybe even harmful paper, in my opinion.