Hacker News | falsissime's comments

> The user has no realistic way of implementing the `call/N` builtin themselves.

Not sure what you mean by realistic, but `call/1` can be implemented by having one simple rule for each existing predicate (for the sake of the argument ignoring control constructs, which require somewhat more complex processing first) plus a rule for uninstantiated variables and one for a nonexistent predicate. And `call/N, N > 1` can then be defined based on it.
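The scheme just described might be sketched like this, for a hypothetical program that defines only two predicates, p/1 and q/2 (the names call_/1, p/1 and q/2 are made up for illustration; the error terms follow the usual ISO shape):

    % One dispatch clause per existing predicate, guarded by cuts so that
    % failure of the called goal is not confused with a missing predicate.
    call_(G) :-
       var(G), !,
       throw(error(instantiation_error, call_/1)).
    call_(p(X))    :- !, p(X).
    call_(q(X, Y)) :- !, q(X, Y).
    call_(G) :-      % no rule matched: the predicate does not exist
       throw(error(existence_error(procedure, G), call_/1)).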


Yes, enumerating all predicates in the system and meta-interpreting all control structures seems unrealistic to me. In the sense that no Prolog application developer (as opposed to a Prolog system implementor) would want to do it. Except maybe as an intellectual exercise.

Of course you wouldn't really need to enumerate all predicates in the definition of call/1. You could first run a whole-program abstract interpretation to identify just the ones that can actually be meta-called. Much more appealing :-)


Yes, this technique has been used by several implementations. And any application developer can use `asserta/1` for the very same purpose. Just one rule, that is certainly much more appealing.
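Presumably the `asserta/1` trick meant here is the single asserted rule whose body is a variable (the name `my_call/1` is hypothetical); a standard-conforming system treats the variable body as a meta-call:

    :- asserta((my_call(G) :- G)).   % one rule; the variable body G is meta-called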


You lost this bet: Write append3/4 which appends three lists to a fourth, such that append3(Xs,Ys,[e],[]) terminates.


For many languages, the lexical part requires an "eager consumer rule"/"maximal munch" in addition to the actual grammar, whereas the remaining grammar does not.


Support of as many modes as possible is an aim that is often too ambitious. Instead, unsupported modes should be indicated with an instantiation error or (much rarer) an uninstantiation_error. In this manner the logic remains intact as long as no error occurs and the program does not loop. And yes, this may come at the expense of its usefulness, think of (is)/2.

YeGoblynQueenne omitted what the mode +,- actually means. And the program does not check it. Does it mean that the first argument must be ground, or is it OK to leave some variables inside? Think of `[a,f(X),b]`. Also, the - sometimes has a descriptive meaning (that is, a completely steadfast argument) and sometimes a prescriptive one (meaning that the program may be incorrect if the argument is not an unaliased variable). If errors were used for the unintended cases, things would be much clearer.


Good points and I have to hang my head in shame. Prolog doesn't enforce mode declarations in documentation and neither do I, most of the time, in my code. I also use modes imprecisely and never define them before I use them in my documentation, which would create confusion if anyone ever looked at my code and tried to run it.

For instance, I use "-" to indicate both a term that's an unbound variable on entry and one that can be partially instantiated on entry. Obviously that will cover different behaviour in different programs. That's not very nice of me. At least, since I've been bitten many times myself by this I try to document the structure of arguments and to give an example or two of calling in different modes, when that is not too hard to do.
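Documenting the mode together with the argument structure, as described, might look like a PlDoc-style structured comment in SWI-Prolog (the predicate encode/2 and its modes here are made up for illustration):

    %!  encode(+Codes:list(integer), -Pairs:list) is det.
    %
    %   Pairs is a list of Code-Count pairs. Codes must be a proper
    %   list of character codes on entry; Pairs is normally an unbound
    %   variable on entry.
    %
    %   Example: ?- encode(`aab`, Ps). gives Ps = [0'a-2, 0'b-1].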

I've been complimented many times about the good documentation of my projects, but I feel so embarrassed when people do that. My documentation sucks. It's worse to have half-done documentation than to have no documentation at all :(


Explicit checking of the appropriate instantiations is something for more or less official interfaces, not for every internal predicate. For those, official checking via must_be/2, can_be/2 and library(si) (all in Scryer) is often all you need. (The SWI version conflates must_be/2 and can_be/2.)

But you could save your beloved cut with minimal effort by using dif_si/2 (see https://stackoverflow.com/a/20238931/772868) just before the cut. In this manner you would get instantiation errors for exactly the cases you cannot handle.
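A sketch of that idea for a hypothetical member-like predicate: commit only when the comparison is decidable, and throw an instantiation error otherwise (this exact definition is just an illustration, not the library(si) code):

    member_si(X, [Y|Ys]) :-
       (  X == Y                 % identical: success is safe
       -> true
       ;  X \= Y                 % can never unify: skipping is safe
       -> member_si(X, Ys)
       ;  throw(error(instantiation_error, member_si/2))
       ).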


>> Explicit checking of the appropriate instantiations is something for more or less official interfaces, not for every internal predicate.

But there's no formal concept of "interface" in Prolog, and the module system sucks (or doesn't exist, depending on implementation) and many programs don't ever use it anyway, so instantiation errors are a possible issue at every level of a program. For me at least it's a constant headache knowing how a predicate must be called (hence my practice of documenting the structure of inputs and outputs carefully).

I really don't like the ad-hoc "type" checking in Prolog (as in integer/1 etc). Prolog doesn't have types, because pre-Church logic doesn't have types (and post-Church logic is a mess; a typed mess, but still a mess). And yet Prolog terms (in the sense of a functor followed by comma-separated terms in parentheses) are implicitly typed, under unification: T is the type of all terms that unify with T. So making sure that terms are correctly instantiated is the most prudent thing to do when checking the "type" of a term. Unfortunately that adds too much clutter and I personally resent having to do it. I have seen various attempts to solve that conundrum over the years; for example there's a package for SWI-Prolog that adds Hindley-Milner type checking (brrr). I don't like any of them.

It's a bit weird how that is the kind of thing that Prolog gets wrong that I rarely hear as an actual criticism, as opposed to complaints about the cut, or the imperfect declarative-ness.

I guess all this would be solved in practice by the addition of a static type system, checked at compile time, which would completely change the language of course. Incidentally, I'm currently working on an ancient (30 years old) Visual Prolog project; Visual Prolog is exactly that, a compiled Prolog with a static type system. I feel that while it sure helps catch some errors, it contrives to suck all the energy and life out of Prolog. Prolog is like parkour: it lets you fly around town, jumping and rolling like Spiderman. So sometimes you get a lamppost right in the face. So what. Who needs type safety anyway?

(She says while changing yet another bloody bandage)

>> But you could save your beloved cut with minimal effort by using dif_si/2 (voir https://stackoverflow.com/a/20238931/772868) just before the cut. In this manner you would get instantiation errors for exactly the cases you cannot handle.

Thanks for the pointer. I had a quick look. I think there's something similar in SWI-Prolog's Picat-style matching with the "=>" operator that is an alternative to ":-". I'm not sure exactly what it does (something about "steadfastness", which you also mentioned, and which is a new term for me) but I prefer not to use libraries that significantly change standard Prolog syntax, for fear of backwards compatibility problems.


> But there's no formal concept of "interface" in Prolog

Any predicate you write for someone else has an interface. "Someone else" includes you yourself in a couple of weeks. That covers all predicates except the truly auxiliary ones, which are more like the basic blocks of command-oriented programming languages (COPLs).

> instantiations errors are a possible issue at every level of a program

That is not the worst that can happen. Nor is non-termination the worst. The worst is if an unexpected failure or success happens that looks like a legitimate answer. However, for many, non-termination is seen as being worse than such real errors. The reason is that there are only very few Prolog systems that reliably catch resource errors or have reliable timeouts. Well, truth be told, there is one system. And the others most often abort.

> (hence my practice of documenting the structure of inputs and outputs carefully).

> I really don't like the ad-hoc "type" checking in Prolog (as in integer/1 etc).

integer/1 is one of the original sins of DEC10, which replaced the errors of Prolog I by silent failure. And everybody then followed suit. Mindlessly. There is library(si) that offers integer_si/1 to make things much cleaner.

As for your remarks on type systems for Prolog, one feature they do not provide is a dynamic conversion between regular Prolog and the typed part. There was an attempt some time ago but unfortunately the individual left for FP.

> Who needs type safety anyway?

It seems there are two different objectives: type safety and instantiation safety. Both are often confounded.

> SWI-Prolog's Picat-style matching with the "=>" operator that is an alternative to ":-".

This approach does not distinguish between instantiation and type errors. Take the sum_list/2 from https://www.swi-prolog.org/pldoc/man?section=ssu

   ?- sum_list(1, N).
      existence_error(matching_rule, sum_list(1, 0, _A)).
   ?- sum_list(L, N).
      existence_error(matching_rule, sum_list(_A, 0, _B)).

And then when adding the suggested catchall rule, both fail. Both. This throws us back by almost half a century. So now DEC10-style errors get into the 21st century. Prolog 0 and Prolog I were better.

   ?- sum_list(1, N).
      false.
   ?- sum_list(L, N).
      false, unexpected.

> "steadfastness", that you also mentioned, and that is a new term for me

In common use since 1987-09-29; see the Prolog Digest of that date, posting by Richard O'Keefe. Here is my definition: an argument Ai is steadfast w.r.t. a goal g(A1,..Ai,..AN) when executing this goal is operationally equivalent to g(A1,..CAi,..AN), CAi = Ai, with CAi a fresh new variable. And more generally, an argument Ai is steadfast if this property holds for all goals.

A good example of a steadfast argument is the second argument of append/3. You can replace any goal append(Xs, Ys, Zs) by append(Xs, CYs, Zs), CYs = Ys without observable effect (except efficiency, resource consumption and the like). But even non-termination is preserved! Another one is the 3rd argument of phrase/3.
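Concretely, for a specific goal (answer bindings shown roughly as a top level might print them; the exact rendering varies by system):

    ?- append([a], Ys, Zs).
       Zs = [a|Ys].

    ?- append([a], CYs, Zs), CYs = Ys.
       CYs = Ys, Zs = [a|Ys].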

Or take your run_length_encoding/2/3, where the second argument is steadfast. You (presumably on purpose) accumulated in the second argument all the elements only to reverse them at the end in a highly Lisp-like manner. At least no destructive nreverse.


>> That is not the worst that can happen. Nor is non-termination the worst. The worst is if an unexpected failure or success happens that looks like a legitimate answer. However, for many, non-termination is seen as being worse than such real errors. The reason is that there are only very few Prolog systems that reliably catch resource errors or have reliable timeouts. Well, to be true, there is one system. And the others most often abort.

Yes, those "right for the wrong reasons" results are a bad problem, too.

Which system are you referring to? I mainly use SWI-Prolog and in my experience it does exit with an error when the stack size limit is exceeded, and it will even give you a message about possible unbounded recursion. Although on my Windows machine it then often crashes immediately after, because it's left with no memory to recover from the error. I normally don't try-catch such errors because they're most of the time a sign of programming error (a well-written program should be able to go into an infinite recursion without blowing the stack).

I've had more mixed experience with SWI-Prolog's resource-limited call/1 variants, like call_with_time_limit/2 and call_with_depth_limit/3 and call_with_inference_limit/3. The first one seems to work OK. The other two are a bit hit-and-miss, although there's warnings about that in the documentation if I remember correctly.

>> Or take your run_length_encoding/2/3, where the second argument is steadfast. You (presumably on purpose) accumulated in the second argument all the elements only to reverse them finally in a highly lispeling manner. At least no destructive nreverse.

Oh, is that lisp-y style? I had no idea. I use it because it makes tracing the program easier. There's a bit of overhead, but I feel it's made up for in spades by the ability to debug errors. If you put an accumulator variable in the head of the goal, it is only bound when the recursion terminates and the stack "unrolls", so you don't get to see the bindings of the accumulator and it's harder to verify the logic of the program. I normally trace with the "unify" port non-leashed but I don't think that changes anything.
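The two styles side by side, as a sketch (copy_direct and copy_acc are made-up toy predicates that merely copy a list):

    % Direct style: the output list is built through the head argument;
    % the tracer shows it only as a growing partial list.
    copy_direct([], []).
    copy_direct([X|Xs], [X|Ys]) :-
       copy_direct(Xs, Ys).

    % Accumulator style: each call carries a fully bound (reversed)
    % accumulator, which is easy to inspect while tracing.
    copy_acc(Xs, Ys) :-
       copy_acc(Xs, [], Ys).
    copy_acc([], Acc, Ys) :-
       reverse(Acc, Ys).
    copy_acc([X|Xs], Acc, Ys) :-
       copy_acc(Xs, [X|Acc], Ys).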

>> A good example of a steadfast argument is the second argument of append/3. You can replace any goal append(Xs, Ys, Zs) by append(Xs, CYs, Zs), CYs = Ys without observable effect (except efficiency, resource consumption and the like). But even non-termination is preserved! Another one is the 3rd argument of phrase/3.

Thanks, this is a clear explanation. To be honest, I don't stay up-to-date with discussions about Prolog programming, as I did in the past. Even in the discussions that I've followed there's a lot that revolves around principles that don't mean anything to me (as I said in another comment, I really can't be bothered about declarative vs. non-declarative programming, for example) and so I'm a bit suspicious of jargon that seems to be pointing to such principles. Maybe steadfastness is a more practical consideration. I'll have to think about it.


> Which system are you referring to?

SICStus. It is the best at catching resource errors (that is, with catch/3) and continuing thereafter happily. SWI does catch many situations, but as you observe it is much less reliable. SICStus is also best for timeouts. I have not tested SWI recently, only noted that the compatibility library(timeout) has been updated. The call_with_inference_limit/3 might be interesting. The others you mention just have bad interfaces. But what both systems lack is a reproducible notion of time on a lower level. Walltime is not useful, as it is load dependent; CPU time depends on TLBs, caches and all that. Think of the number of (completed) CPU instructions, PAPI-wise. While there is some JIT-tering in both systems, this is still the best. Anyway, I start daydreaming...

> they're most of the time a sign of programming error

Often yes, but then you need to explain such errors which incurs a lot of attempts to execute fragments of the looping program. Like with a failure-slice https://stackoverflow.com/tags/failure-slice

And then, there is another kind of programs that effectively do not terminate but are still useful. Those that try to find counterexamples (posh expression for bugs).

> Oh, is that lisp-y style?

Recursive functions a la (cons x (recursive ...)) are not tail recursive (or at least were not in the 1970s), whereas the corresponding code in Prolog is. So they were often transformed with an auxiliary argument that collected all conses in the wrong direction and were nreverse-d thereafter, since (ideally) no one else referred to that list. I believe this has been solved in the meantime.

And yes, with a step-by-step tracer, this style makes things easier to watch. OTOH, you could use purer methods for debugging (for the pure part of your programs). At least, when you are into constraints, the tracers are of no use any longer and you have to switch.


For one, the second argument should be (upon success at least) a list. But in your program it isn't a list for `[]`.


Are you talking about this?

  run_length_encoding([], []-0):- !.
Yeah, well spotted, that looks stupid. Who knows what I was thinking...


(It takes some time here, before the reply link appears)

While I do not know what you were thinking either, I do know that you insisted on a mode +,- and thus refrained from using the most general query to test your program just in case. Prolog would have shown you the problem right in the first answer!

    ?- run_length_encoding(L,E).
       L = [], E = []-0, unexpected
    ;  ... .

While the most general query often leads to an unfair enumeration of answers/solutions, it is still a useful and effortless way to test a program, just in case.


>> (It takes some time here, before the reply link appears)

Yes, I think that happens when threads exceed a certain size, perhaps also when replies come at a certain rate. I believe it's a measure put in place to cool down hot heads revving up to a flame war, or something like that.

Or maybe it's just a measure put in place to avoid people kicking a programmer for an error she made in decades-old code >:P

(Thanks for the advice though, it's good advice and useful for programmers at any stage of their evolution).

P.S. I kept thinking of what you said that Kowalski said regarding Colmerauer's knowledge of meta-interpreters. I'm really curious to know and there's a couple other questions I have he might be interested in answering so I plan to write him an email and point him to your comment (https://news.ycombinator.com/item?id=34243808).


(It seems one needs to go into a post directly to be able to reply more rapidly. The delay seems to be reserved for the thread-mode.)

Before sending an e-mail, check https://www.softwarepreservation.org/projects/prolog/ where there are so many original sources now online that help to reconsider some perceptions. In particular, the Prolog I manual is there, but in very bad shape (bad scan from a worn-out chain printer, probably a 3203, all caps, no accents ...).


>> (It seems one needs to go into a post directly to be able to reply more rapidly. The delay seems to be reserved for the thread-mode.)

Oh, right, good catch!

That's an amazing resource you linked me to. Thanks!

On the other hand, it's such an awful feeling sometimes that all the activity of the early years of logic programming has died out, that I've come to it too late, and that so many of the people who created the field are now retired or gone.


> ... all the activity of the early years of logic programming has died out, ...

Of course, now there is activity of the current years! What else could we have now? If you look back, there was not only progress but also quite a lot of regress. Like, dif/2 in Prolog 0, then to stay submerged for so long, resurfacing in Prolog II and Mu etc. Errors in Prolog I, then abolished in DEC10 at the expense of incorrectness, only to resurface later on, but still not entirely recovered...


Btw, the reason I want to write to Kowalski is not just the history of meta-interpreters. There's a question that only he may be able to answer. The question is about the Subsumption Theorem in Inductive Logic Programming (ILP).

Briefly stated, the Subsumption Theorem, as it's known today, is that a clause D is entailed by a logic program S if there exists a clause C that can be derived from S and such that C subsumes D.

Similar statements are also found in the logic programming literature, for example in Lloyd's Foundations of Logic Programming.

There's a bit of a mystery about the origins of the Theorem.

A series of similar papers by Nienhuys-Cheng and de Wolf point out different definitions and apparent rediscoveries of the Theorem, and proofs of it, some of them mistaken, in the ILP literature. The paper I use as my reference is titled The Subsumption Theorem in Inductive Logic Programming: Facts and Fallacies (http://homepages.cwi.nl/~rdewolf/publ/ilp/ilp95.pdf found on Ronald de Wolf's webpage, here: https://homepages.cwi.nl/~rdewolf/).

That paper claims that the Theorem was probably first stated by R. C. T. Lee, in his 1967 doctoral thesis. It seems this information comes from Kowalski, in a paper that I can't access fully (cited in the Nienhuys-Cheng and de Wolf paper). However, the earliest statement, and proof, of a Subsumption Theorem that I could find is from Robinson's resolution paper, published in 1965, where subsumption is also defined. The Subsumption Theorem in Robinson's paper is slightly different from the one found in the ILP literature; in fact it looks to me like the modern version of the Subsumption Theorem is a corollary of Robinson's one.

If I were to rephrase Robinson's Subsumption Theorem in more modern terms, what it says is as follows:

[Restated Subsumption Theorem] Let S be a finite set of clauses and C ≠ D be two clauses that are not in S and such that C ≼ D (C subsumes D). Then, S ∪ {C,D} is satisfiable if and only if S ∪ {C} is satisfiable.

So this statement is missing the explicit requirement for derivation of C from S, but if S ∪ {C} is satisfiable then C should be derivable from S by Resolution.

Now, R. C. T. Lee's thesis is not available anymore, it seems, and it wasn't available to Nienhuys-Cheng and de Wolf, either, so there's a lot of uncertainty around what, exactly, Lee stated and proved, and how it is similar or different to Robinson's Subsumption Theorem, and to subsequent statements of the Subsumption Theorem in ILP. Kowalski may be able to clarify the confusion.

I'm posting this in the very slim chance that you might have some intuition about all this, in which case I'd be grateful to hear it.


There is a minor impurity in this code: `N = 1+1, rle([a,a],[[a,N]]).` succeeds, yet `rle([a,a],[[a,N]]), N = 1+1` fails. Add `:- op(150, fx, #).` and use it like `#CountPlus1 #= #Count+1` etc.


> but the burning problem that datalog does fix is Prolog's semi-decidability, or in other words, its tendency to enter infinite recursions.

Prolog's built-in search is not semidecidable. https://en.wikipedia.org/wiki/Decidability_(logic)#Semidecid...

As you observe, Prolog gets stuck in left-recursions. But iterative deepening for Prolog is semidecidable (among other fair methods). This indeed is restricted to the pure, monotonic subset of (full) Prolog together with all pure, monotonic extensions.
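Iterative deepening, as mentioned, is often sketched as a depth-bounded meta-interpreter over the pure subset; this illustrative version (the names prove/1 and prove/2 are made up) enumerates depth bounds fairly via length/2:

    prove(Goal) :-
       length(_, N),        % N = 0, 1, 2, ... : fair enumeration of bounds
       prove(Goal, N).

    prove(true, _) :- !.
    prove((A, B), N) :- !,
       prove(A, N),
       prove(B, N).
    prove(Goal, N) :-
       N > 0,
       N1 is N - 1,
       clause(Goal, Body),  % user-defined, pure predicates only
       prove(Body, N1).

Answers found within bound N reappear at every larger bound; real implementations avoid the duplicates, but the fairness argument is the same.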


>> As you observe, Prolog gets stuck in left-recursions. But iterative deepening for Prolog is semidecidable (among other fair methods). This indeed is restricted to the pure, monotonic subset of (full) Prolog together with all pure, monotonic extensions.

Thanks, yes, I might be fudging terminology a bit.

Generally, when I talk about Prolog I mean definite programs (sets of definite clauses and Horn goals, the latter usually called "definite goals") under SLD-resolution. That's more or less what is usually meant by "pure" Prolog: definite programs, without not/1 implementing Negation-As-Failure, without the cut, and without side effects; and executed by giving an initial Horn goal as a "query". Entailment between definite clauses and satisfiability of definite programs is undecidable.

By "semi-decidable" I mean that an SLD-refutation proof will terminate if there is a branch of the proof that ends in □ and is of finite length. If such a branch does not exist, a proof will "loop" forever. That's regardless of whether the branch succeeds or fails, which is a bit of a fudge, but, in practice, there is no other way to decide the success or failure of a proof than to search all its branches.

Left-recursion is one way in which Prolog, with depth-first search, generates branches of infinite length, but you can get those with right-recursion also. There are restrictions of Prolog, like SLG-resolution (similar to DFS with iterative deepening) that don't loop on left-recursions but the general case remains undecidable.

Fortunately, there seem to be at least as many finite proofs as there are infinite ones, or in any case I have never encountered a Prolog program that looped indefinitely and that couldn't be rewritten to terminate, at least when called in the way it was meant to be called. And that's also a bit of a fudge.


Any progress conformity-wise for Quantum? Its documentation reads as a full ISO Prolog implementation: https://quantumprolog.sgml.io/docs/langreference.html


op(150, fx, #) would make the SWI code more readable.


The 1978 DEC10 Prolog user guide mentions PROG.PL as an example of a Prolog file with an extension. See 3.2 in https://userweb.fct.unl.pt/~lmp/publications/online-papers/U...

Maybe the 1975 Prolog I manual has more to say...


Good work!

I think it's even more explicit on p25 where it says "A module name can have the form either file.ext or just file. In the latter case the extension 'PL' is implied."


