> You both have good points but they really needn't conflict so harshly.
I disagree, lumberjack's point is essentially an appeal to authority - which is dark-age style thinking.
Just represent everything in a machine readable set of axioms, problem solved. You don't need to be an expert in every field, you just need to have a basic understanding of first order logic.
Right, but someone still has to do the representation (encoding the information into the machine readable format), and how can you ever know that someone is encoding it correctly?
In addition, your assumption is that everything can be encoded in an axiomatic language (probably not true), and that we have enough information to encode it all even if it was possible.
> Right, but someone still has to do the representation...
The same people writing papers now.
> and how can you ever know that someone is encoding it correctly?
Reasoning engine. As new data is entered, it is run against prior data; to the end user it would look almost like a spell check.
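To make the "spell check" idea concrete, here's a toy sketch (the facts and rules are invented for illustration, and this is nothing like a real reasoner): a new entry is forward-chained against the existing axioms, and any derived contradiction gets flagged the way a spell checker underlines a word.

```python
def closure(facts, rules):
    """Forward-chain: apply implication rules until no new facts appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            if premise in facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

def check_entry(new_fact, facts, rules):
    """Return any facts derived both ways ("P" and "not P") once the
    new entry is chained against the prior data."""
    derived = closure(set(facts) | {new_fact}, rules)
    return {f for f in derived if "not " + f in derived}

# Invented example: whales are mammals, mammals breathe air,
# fish do not breathe air.
kb = {"is_mammal(whale)"}
rules = [
    ("is_mammal(whale)", "breathes_air(whale)"),
    ("is_fish(whale)", "not breathes_air(whale)"),
]

print(check_entry("is_fish(whale)", kb, rules))
# → {'breathes_air(whale)'}: the new entry contradicts prior data
```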
> ...your assumption is that everything can be encoded in an axiomatic language (probably not true)...
That is an extremely safe assumption to make, as the problem has been studied for a long time and I'm aware of no evidence that would back up your position.
> ...and that we have enough information to encode it all even if it was possible.
> That is an extremely safe assumption to make, as the problem has been studied for a long time and I'm aware of no evidence that would back up your position.
> Just represent everything in a machine readable set of axioms, problem solved.
We all look forward to the day that you or anyone else can do this.
emptytheory's suggestion: have tons of time and determination to solve the problem. Caveat: the problem has already been solved and applied by many experts many times; the main criticism is that these experts' application over repeated attempts is not always consistent.
Your suggestion: encode all necessary knowledge to solve the problem so that a computer can solve it for you. Caveat: this is either being done already by domain experts, or you must go out of your way to navigate the problem based on logic and not experience (tedious, but potentially very rewarding) so as to encode this information yourself (taking some unknown amount of time).
Caveat in all cases: You are just as apt to fuck up along the way as anyone else who possesses the same logical faculties as you, which presumably at least some other experts in question would.
lumberjack's argument may not possess the logical upper hand, but it is a valid concern for people who are mortal, employed (or otherwise occupied with their time), and without access to a medical library. Perhaps the best way to put this is, I look forward to the day when you can prove their reasoning wrong in such a reproducible way rather than completely dismissing a conclusion due to some fault along the way.
> I look forward to the day when you can prove their reasoning wrong in such a reproducible way rather than completely dismissing a conclusion due to some fault along the way.
A set of axioms with a reasoner would do both of those things. That will be web 3.0; it is being worked on.
I disagree, this is a "fallacy fallacy" - you named a fallacy which lumberjack (apparently) used, but that doesn't actually make the argument wrong.
And you seem to ignore the fact that getting a degree takes most people multiple years, and that "student" is an occupation. If you follow the evidence, people can't learn everything because it takes way too much time.
I disagree, this is a "fallacy fallacy fallacy". You seem to ignore the fact that I did not suggest that people can learn everything. I suggested that people learn enough logic to use it as a tool to make learning everything else unnecessary.
Also, the scarcity of time being used to justify the economically reasonable appeal to authority, in the context of global warming / scientific method consensus, may be the most unintentionally funny thing ever.
Wow, I see I'm going to have to break this down Barney style in order to reach you:
The silly argument that started this all off was that you have to be an expert in every field in order to examine complex systems or problems that span multiple domains. This is simply not true, because a complex idea depends upon simpler ideas. These ideas can be formalized, where scientific theory occurs at the edge nodes and verification occurs at well connected nodes. This would allow an individual to select a layer of abstraction to work on - not unlike software development.
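As a rough sketch of the layering idea (the claims and dependencies below are invented): represent ideas as a dependency graph, where a higher-level claim counts as verified once everything it rests on is, so one can work at a chosen layer without re-deriving the layers below.

```python
# Invented dependency graph: each claim lists what it depends on.
deps = {
    "axiom_a": [],
    "axiom_b": [],
    "lemma": ["axiom_a", "axiom_b"],
    "theory": ["lemma"],
}
confirmed = {"axiom_a", "axiom_b"}  # independently checked base claims

def verified(claim):
    """A claim is verified when it is confirmed directly, or when
    every claim it depends on is verified."""
    if claim in confirmed:
        return True
    d = deps[claim]
    return bool(d) and all(verified(x) for x in d)

print(verified("theory"))  # → True: one can work at the "theory" layer
```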
This isn't very far off from the present system of scientific journals and peer review.
>This is simply not true, because a complex idea depends upon simpler ideas. These ideas can be formalized, where scientific theory occurs at the edge nodes and verification occurs at well connected nodes.
Only this is a very naive reductionistic epistemology, and not enough to cover modern science.
That would only be true if the finest component of information in "modern science" could not be represented as true/false/unknown. I know that back in the day, folks working on cybernetics struggled with something like this in neural networks: they were stumped by non-steady-state output (they were hoping to represent everything as true/false). The solution was to just raise the layer of abstraction in representing the output, leaving enough room on lower layers to describe non-steady state as another potential output state. Problem solved. If you've got an example demonstrating your concern, that would be helpful.
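For what it's worth, the true/false/unknown representation mentioned here is well studied; Kleene's strong three-valued logic is the usual formulation, and a minimal sketch looks like this (using `None` for "unknown"):

```python
# Kleene's strong three-valued logic: "unknown" (here None) propagates
# unless the other operand already decides the result.
T, F, U = True, False, None

def k_not(a):
    return U if a is U else (not a)

def k_and(a, b):
    if a is F or b is F:
        return F
    if a is U or b is U:
        return U
    return T

def k_or(a, b):
    if a is T or b is T:
        return T
    if a is U or b is U:
        return U
    return F

print(k_and(T, U))  # → None: still unknown
print(k_or(T, U))   # → True: decided despite the unknown operand
```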
The reductionistic part is in the very belief that there's such a thing as a "finest component of information" in the first place.
>If you've got an example demonstrating your concern, that would be helpful.
What I say is that sufficiently rich theories such as those we have today don't have "finest components" in the sense of being parsable down to some kind of "atoms" that are independent of the overall structure.
The whole intelligence lies in the connections between the components, and verifying that they are individually "correct" doesn't say much.
> The reductionistic part is in the very belief that there's such a thing as a "finest component of information" in the first place.
That seems like a major leap. I've heard people propose that there are limits to human understanding due to complexity, but this is the first time I've heard the suggestion that there is some level of information beyond any possible measurement. The lowest level I can think of is existence vs nonexistence - and you are essentially suggesting that there is some other state beyond measurement, and therefore beyond reasoning. Of course, such a thing would be impossible to prove... so the scientific method would be of no use. So if what you are suggesting is true, then it would have no influence on what I'm proposing anyway. Wait... you aren't religious, are you? I'm not trying to pry or be insulting, but this suggestion would only really make sense in the context of trying to establish a place for religion in science.
As far as the rest, formal logic exists to do exactly what you say can't be done. Your argument sounds more like an appeal to emotion than anything else.
>Just represent everything in a machine readable set of axioms, problem solved
Great idea! Let me just tell my friends Hilbert and Russell about it! Maybe my friend Kurt will like it too!
/s
Not to mention that for most of the problems science has to tackle, we cannot even begin to formulate them in some concise "set of axioms", even if that worked in theory.
I can't be sure from such a short reply, but in this glib statement, and the one you made above, you seem to be unaware of the very real consequences of the Gödel Incompleteness Theorem. This is what @coldtea is referring to.
In short: even for a relatively easy-to-quantify universe of discourse like Mathematics, this theorem implies that (of necessity) some propositions will not be provably true or false, or that the system will contain a contradiction. You have a choice of either incompleteness or contradiction (incompleteness seems better).
That's for Mathematics. Now, consider physics, biology, or sociology. And more important, consider that the questions present are at the frontiers, so are very much not reducible to codification. It's a real problem. If you'd ever worked on a really complex, multifaceted, end-to-end science problem (say, weather forecasting, or drug design), you'd have a little more humility.
You saw my link to the open world assumption above... so I'm not sure why you'd go on about the incompleteness theorem.
Can you think of an idea that cannot be represented as true, false, or unknown? Now consider a hypothesis. I'm not saying it would be simple or easy, just possible and preferable to the present system.
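Since the open world assumption came up: the practical difference is that a closed-world query treats anything not derivable as false, while an open-world query answers "unknown" instead. A tiny sketch, with invented facts:

```python
# Invented facts; "negations" holds what is explicitly asserted false.
facts = {"orbits(earth, sun)"}
negations = {"orbits(sun, earth)"}

def ask_closed(q):
    # Closed world: anything not in the knowledge base is simply false.
    return q in facts

def ask_open(q):
    # Open world: absence of evidence is not evidence of absence.
    if q in facts:
        return True
    if q in negations:
        return False
    return "unknown"

print(ask_closed("orbits(mars, sun)"))  # → False
print(ask_open("orbits(mars, sun)"))    # → unknown
```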
As far as my apparent lack of humility: in the interest of not wasting your time or my own, I've truncated my correspondence. From now on, just imagine that all my posts are prefixed with a paragraph in which I grovel before the throne of scientific greatness.
>As far as my apparent lack of humility: in the interest of not wasting your time or my own, I've truncated my correspondence. From now on, just imagine that all my posts are prefixed with a paragraph in which I grovel before the throne of scientific greatness.
You should write some code so that when you press the reply button, it appends some form of lexical prostration derived from the comment space of the identity you're replying to :P
In all seriousness, given all the effort that goes into signaling within academia (and to the external world) about the "real problems" people are solving, it appears that automated approaches to every aspect of how research is conducted will happen, because they are more efficient and consume less energy than, say, a human being worrying about whether their methods paper will be accepted, how to please reviewers, and so on…
I mean, the fact that my PI hired me, someone who didn't graduate from undergrad, over all the PhDs who get rejected for volunteer positions, because I can slap some code together, must say something about the direction things are going in this world. But when I tell the postdoc that his spectrograms look the way they do when he downsamples because of less constructive interference (while also trying to signal humility, because how dare some non-degreed folk pontificate on such things as a matter of established fact like the rest of the folks around here do, even when asked for help), he has to go ask the sr. research scientist the next day, only to tell me that I was right… that's 24 hours his clunky MATLAB script could have run! lol
I'm happy to report that I have zero experience in the postgrad industry. While I'd love to spend most of my time working in pure theory and potentially influencing an entire field, I really don't think I'd be able to put up with some of the antics I've heard about. There is plenty of silliness that occurs in the corporate world, with the information silos and kingdom building, but at the end of the day money talks and bullshit walks - with little delay.
This problem is being worked on, and I'm pretty confident that the solution will be based on the principles of the semantic web. I have a feeling that academia will be pretty late to the party when it comes to implementation, though, if half of the stories I've heard are true.
Hey everyone, let's all point and laugh at the primitive still using first-order logic rather than stochastic type theory! What does he think this is, the 1970s? Wake up, bro: a whole century has passed.