If any Ada users are here, I have a question on one section:
procedure Main is
   type Distance is new Float;
   type Area     is new Float;
   D1 : Distance := 2.0;
   D2 : Distance := 3.0;
   A  : Area;
begin
   D1 := D1 + D2;        -- OK
   D1 := D1 + A;         -- NOT OK: incompatible types for "+" operator
   A  := D1 * D2;        -- NOT OK: incompatible types for ":=" assignment
   A  := Area (D1 * D2); -- OK
end Main;
> The predefined Ada rules are not perfect; they admit some problematic cases (for example, multiplying two Distances yields a Distance) and prohibit some useful cases (for example, multiplying two Distances should deliver an Area). These situations can be handled through other mechanisms.
I can get why you have to be explicit in type casts if you're trying to be safe, but is there any way to say that
type Area is Distance * Distance
Or
type CubeVolume is Distance * Distance * Distance
So that if I say a : Area and b : CubeVolume, then a := d1 * d2 just works, and b := a * d3 also works.
In other words, explicit casts not needed.
Or is it the case that Ada really took the concept of "explicit is better than implicit" and went to town?
I just wonder because I love F#'s unit of measure types and how they can prevent logical type errors (say, multiplying 10 m/s by 3kg when assigning to a variable of type m/s^2).
Or for this case, if I wrote
x: Distance := y: Distance * z: Distance
It'd fail hard, because a Distance * a Distance is an Area.
I sorta figured Ada would have invented this sorta stuff and then it was cribbed by other languages.
So yeah, wondering what the other mechanisms the author mentions are. Because I've always heard Ada is great at type safety, so I'd be keen to see how safe it makes your code when combining values with units that aren't logical to combine.
I am not an Ada expert, but I came up with this. Please notice that the idea here is not to have the compiler do proper dimensional analysis but simply to avoid the default operator overloading of "*" for the Distance type that also returns a Distance value.
Nothing very high-level, as we need to explicitly manipulate the single-component records, but it does the job.
with Ada.Text_IO; use Ada.Text_IO;

procedure Main is
   type Distance is record
      Value : Float;
   end record;

   type Area is record
      Value : Float;
   end record;

   function "*" (Left, Right : Distance) return Area is
   begin
      return (Value => Left.Value * Right.Value);
   end "*";

   D1 : constant Distance := (Value => 10.0);
   D2 : constant Distance := (Value => 20.0);

   -- D3 : constant Distance := D1 * D2;
   -- Does not compile. GNAT returns the following error:
   -- main.adb:21:33: expected type "Distance" defined at line 4
   -- main.adb:21:33: found type "Area" defined at line 8

   A : constant Area := D1 * D2;
begin
   Put_Line ("Area A is " & Float'Image (A.Value));
end Main;
with Ada.Text_IO; use Ada.Text_IO;
with System.Dim.Float_Mks; use System.Dim.Float_Mks;
with System.Dim.Float_Mks_IO; use System.Dim.Float_Mks_IO;

procedure Main is
   subtype Distance is System.Dim.Float_Mks.Length;
   subtype Area is System.Dim.Float_Mks.Area;

   D1 : constant Distance := 10.0 * m;
   D2 : constant Distance := 20.0 * m;

   -- D3 : constant Distance := D1 * D2;
   -- Does not compile. GNAT returns the following error:
   -- main.adb:13:08: dimensions mismatch in object declaration
   -- main.adb:13:37: expected dimension [L], found [L**2]

   A : constant Area := D1 * D2;
begin
   -- Prints: Area A is 2.00000E+02 m**2
   Put ("Area A is ");
   Put (Item => A);
   Put_Line ("");
end Main;
Oh, thanks for pointing it out. Honestly, I was more interested in leveraging the type system than in proper dimensional analysis. I could find no mechanism by which Ada allows the programmer to disable the default operator overloads, so I decided to try that out.
There is now, at least with Ada 2012 and GNAT[0][1]. While I'm pretty sure only GNAT has implemented it, I'm not sure if it's actually in the 2012 standard or not.
I haven't played with it yet, but it does look pretty interesting.
with Ada.Text_IO; use Ada.Text_IO;

procedure Main is
   type Distance is new Float;
   type Area     is new Float;

   function "*" (Left, Right : Distance) return Area is
      Temp : Distance;
   begin
      --  Resolves to the predefined Distance "*" because Temp is a Distance
      Temp := Left * Right;
      return Area (Temp);
   end "*";

   D1 : Distance := 2.0;
   D2 : Distance := 3.0;
   A  : Area;
begin
   A := D1 * D2; -- OK
   Put_Line (A'Image);
end Main;
Interestingly, it took me several attempts to create this overload; making it a single-line return would make it recursive:
function "*" (Left, Right : Distance) return Area is
begin
return Left * Right;
end "*"; -- ^ Does not work
-- raised STORAGE_ERROR : stack overflow or erroneous memory access
Also:
function "*" (Left, Right : Distance) return Area is
begin
return Area (Left * Right);
end "*"; -- ^ Does not work either, compiler gives me "ambiguous operand in conversion"
IIRC, the one-liner here [assuming you disabled the default "*" operator] would be:
return Area (Distance'Base'(Left * Right));
The apostrophe in the inner portion is 'qualification', a way of telling the compiler that the enclosed value is supposed to be of a particular subtype. (Typically used to resolve ambiguities.)
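Putting that together as a complete program, here's a minimal sketch. Note that I've left the predefined "*" in place rather than disabling it; the qualification alone seems to be enough to steer resolution:

with Ada.Text_IO; use Ada.Text_IO;

procedure Main is
   type Distance is new Float;
   type Area     is new Float;

   --  The qualified expression pins "Left * Right" to the predefined
   --  "*" on Distance'Base (which returns Distance'Base), avoiding both
   --  the recursion and the "ambiguous operand" error seen above.
   function "*" (Left, Right : Distance) return Area is
   begin
      return Area (Distance'Base'(Left * Right));
   end "*";

   D1 : constant Distance := 2.0;
   D2 : constant Distance := 3.0;
   A  : constant Area     := D1 * D2;  -- resolves to the overload above
begin
   Put_Line (Float'Image (Float (A)));
end Main;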
One of my favorite features of Ada 2012 is the set of design-by-contract checks taken from Eiffel[0]:
Preconditions: "a condition or predicate that must always be true just prior to the execution of some section of code or before an operation"[1]. It is up to the caller (client) to set up the preconditions and ensure that they are true before calling the code in question. If any preconditions are not met, it is the fault of the caller (client).
Postconditions: "a condition or predicate that must always be true just after the execution of some section of code or after an operation"[2]. The code in question guarantees that the postconditions will be true after the code is executed. If any postconditions are not met, it is the fault of the callee (supplier).
Invariants: conditions or predicates that "can be relied upon to be true during the execution of a program, or during some portion of it"[3]. Both parties must ensure the invariants hold.
These features make it very easy to determine where a bug is in the code and make it very explicit what is expected of the caller (client) and callee (supplier).
They act as run-time sanity checks and push Ada / SPARK code in the direction of Haskell function signatures and types. With proper preconditions, postconditions, and invariants in place I think it should be possible to implement a QuickCheck-style[4] testing system to provide some empirical checks if SPARK proof checking is not used.
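To make that concrete, here is a minimal sketch using the Ada 2012 contract aspects (the Stacks package is made up for illustration; build with GNAT's -gnata switch, or pragma Assertion_Policy (Check), so the contracts are checked at run time):

package Stacks is
   type Stack is private;
   function Is_Full  (S : Stack) return Boolean;
   function Is_Empty (S : Stack) return Boolean;

   procedure Push (S : in out Stack; X : Integer)
     with Pre  => not Is_Full (S),   -- caller's obligation
          Post => not Is_Empty (S);  -- callee's guarantee
private
   type Int_Array is array (1 .. 100) of Integer;
   type Stack is record
      Top  : Natural := 0;
      Data : Int_Array;
   end record;
   function Is_Full  (S : Stack) return Boolean is (S.Top = S.Data'Last);
   function Is_Empty (S : Stack) return Boolean is (S.Top = 0);
end Stacks;

package body Stacks is
   procedure Push (S : in out Stack; X : Integer) is
   begin
      S.Top := S.Top + 1;
      S.Data (S.Top) := X;
   end Push;
end Stacks;

If a caller pushes onto a full stack, the failure is reported at the call site as a precondition violation, which is exactly the blame assignment described above.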
I would love to see these design-by-contract features added to Rust, C++, and even C.
C++20 was supposed to have them, but they were removed at the very last minute. If we are lucky, maybe C++23 will get them, but C++26 is more likely, if they ever go forward with them. [1]
For C you can have a look at Frama-C. [2]
D also has DbC. [3]
In case you can target Windows only, VC++ supports SAL annotations; while not DbC, they help to improve code security.
The problem I’ve seen with trying to add non-static contracts to performance-sensitive languages has been ideological rather than technical.
Specifically, do the language semantics allow a mode where these checks can be turned off? Leaving them on all the time can be a huge performance burden, both directly and in the barriers they add to optimization (while there are potential optimization benefits to exploiting the guarantees of contracts, I don’t think as a practical matter they balance out).
But allowing them to be turned off in “release mode” eliminates much of the benefit in all modes: Developers can no longer assume that their code always runs in a context where the contractual guarantees are met, so they have to guard data integrity or safety critical operations dynamically anyway.
I guess it's the same as bounds checks and other runtime validity checks integrated into the language. When we upgrade compiler versions, I usually run a global app profile (perf, plus the internal tools you must have when latency and CPU load are actual product requirements). Every 'new' hot spot is analyzed, and if it's a contract, I check whether there's a less costly way to perform it, or whether watering down the contract seems OK (usually not). The thing is, a precondition (especially if you went through AoRTE, or silver-level SPARK) can authorize you to aggressively remove all runtime checks 'behind' it, and get far more performant code.
> Specifically, do the language semantics allow a mode where these checks can be turned off? Leaving them on all the time can be a huge performance burden, both directly and in the barriers they add to optimization (while there are potential optimization benefits to exploiting the guarantees of contracts, I don’t think as a practical matter they balance out).
Yes, with Ada it is typically a pragma or compiler-flag to turn checks off; however, Ada has a long history of having mandatory-checks and strongly-encouraging compiler-implementers to optimize the checks away when it is known they cannot fail (which is a surprising chunk of the time if you're coming from a C-based language).
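For reference, a sketch of the pragma side (GNAT also has command-line equivalents: -gnatp to suppress checks, -gnata to enable assertions):

pragma Suppress (Range_Check);     -- drop one language-defined check
pragma Suppress (All_Checks);      -- or drop all of them
pragma Assertion_Policy (Ignore);  -- compile out Pre/Post/Assert contracts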
They can be turned off for release mode, and the SPARK subset of Ada can use them in formal verification, proving that the conditions hold true for all inputs statically, eliminating the need for any checks.
I think the performance impacts are usually not as big as you think (the Ada compiler is smart enough to eliminate redundant checks where it can), and I believe that they can be turned off per package, too, so for performance critical modules they can be deactivated in release mode.
I do not think contracts were intended to raise recoverable errors in a normal application's behavior; if they are triggered, then the program is incorrect, not just encountering an error. I think most developers would leave them on in release mode (if possible) just to get stack traces and exception messages to see what failed.
The paper "Applying Design by Contract" by Bertrand Meyer shows how DbC is to be used. It is sufficient to have only preconditions turned on by default in "release" mode, others are optional.
My rule of thumb is to have all preconditions/postconditions/invariants turned on only in the boundary functions (i.e. the public API) of a module, while the inner cohesive functions have only preconditions turned on.
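Ada 2012 can express that policy directly, per assertion kind; a sketch (scope it per unit as needed):

--  Keep preconditions checked, drop the more expensive postconditions
--  and invariants, e.g. for a release build.
pragma Assertion_Policy
  (Pre            => Check,
   Post           => Ignore,
   Type_Invariant => Ignore);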
Any language that has assertions can be used to simulate at least preconditions and postconditions, and maybe some support for invariants.
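For instance, in Ada syntax (to keep one language in the thread), a sketch of that simulation with a hypothetical Withdraw procedure:

procedure Withdraw (Balance : in out Natural; Amount : Natural) is
   Old_Balance : constant Natural := Balance;  -- snapshot for the postcondition
begin
   pragma Assert (Amount <= Balance);               -- simulated precondition
   Balance := Balance - Amount;
   pragma Assert (Balance = Old_Balance - Amount);  -- simulated postcondition
end Withdraw;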
As the team leader for the second version of a database middleware product developed in C, and having been convinced by study of the efficacy of assertions for DbC (and hence better-quality software), I made sure my team used C assertions heavily throughout the codebase, at the entry and exit points of functions, to implement preconditions and postconditions, despite some opposition to it.
End result: the product was a success, and was used in multiple software projects for customers.
We were rewarded well for it.
Edit: I first learned about DbC myself, via reading about Eiffel and Bertrand Meyer's work, early on.
I should add that the product being a success was not solely due to using assertions for DbC, of course, although, IMO, that did play a significant role in bug detection and hence removal, and better product quality. I also ensured that the team consistently applied some other basic software engineering practices, which were not ordinarily followed in that company. The product's success was the result of all the practices that were applied, and also a good team.
Yes. SPARK/Ada will attempt to verify many of the properties statically. Not everything can be, and not everything can be done without significant effort (that is, more than it may be worth). But it's more than asserts in most other languages which are only verifiable dynamically. For those areas where SPARK can't prove something statically, the checks are used at runtime (like a traditional assert). It's a best of both worlds situation, in that regard.
Asserts are just a language mechanism with which you implement preconditions/postconditions/invariants, which are program-correctness design constructs. Thus DbC is a systematic (i.e. not ad hoc) way of using asserts correctly to guarantee program state.
I highly recommend reading Bertrand Meyer's papers "Applying Design By Contract" and "Design by Contract". They are well worth every programmer's time.
That is one reason why I never liked C, and why C++ was a better option when coming from Turbo Pascal 6.
Although Turbo Pascal isn't Ada, it did allow for a similar approach with stronger types, which C++ also kind of supports (though C compatibility spoils it).
There's a lot of FUD spread online, partly derived from historic facts, partly total bogus.
I like how Luke A. Guest (https://github.com/Lucretia) put it:
"You’ve got to love it when people who [know] nothing about Ada like to tell other people, who know nothing about Ada, things about Ada that they probably heard from someone else, who knew nothing about Ada." -- https://users.rust-lang.org/t/if-ada-is-already-very-safe-wh...
That also applies to lots and lots of comments on this site!
My personal reasons as someone who has learned the language quite intensively but ultimately decided not to use it: (i) Vendor lock-in and too much future dependence on Adacore (other commercial Ada compilers do not count as alternatives to me because they are super-expensive), (ii) I can't always use the MGPL and would prefer MIT licenses of important Ada packages, (iii) the user community is split in a weird way between very professional aerospace and high integrity systems engineers and fairly dilettante hobbyists, but not many users in the space in-between those extremes, and (iv) not enough libraries for mundane tasks/operating system integration/GUIs.
I'm currently using Go. Although I would prefer Ada as a language, (iv) is in the end decisive for my tasks. If I used Ada I'd spend half of my time reinventing the wheel or interfacing with C libraries. I'm hoping to find a use for it in the future, though.
As someone who is going all in with Ada, every time I have to reinvent the wheel I plan on releasing it as a MIT licensed library on github. Hopefully if enough Ada programmers do that, we won’t have to worry about it as much.
Ada was standard in aerospace and defence projects in the UK when I started in software back in the '80s, although I personally never worked in those areas. It may be that its perceived lack of popularity is tied to its association with those rather more secretive lines of work, although that doesn't in and of itself explain why it didn't become more broadly used. Other commenters have mentioned cost, and that consideration was enough to kill another technically excellent language (Smalltalk).
My gut perception of Ada is unfortunately mediated through the murky lens of its bastard offspring PL/SQL, which is by a good distance the least favourite of any language I have ever used, although I'd be willing to entertain the argument that this is in large part due to all the ugly and ill-considered bits nailed onto it by Larry's mob rather than inherent defects in the parent language itself.
On the other hand I really quite like programming in Postgres's version of PL/SQL which I find to be pleasantly consistent and quite easy to understand.
Have only recently started with Postgres, but have been really impressed with the whole product and yes totally agree - the language seems to be designed by people who actually understand the DML tasks that programmers want to perform.
1. Being good is not enough to be popular. In most cases, to be popular a technology needs to be actively promoted by a big brand (or at least a non-profit foundation). E.g. Java is popular thanks to Sun, and Go thanks to Google. It also helps to have free or cheap tools.
2. This problem also has another side: C is much more popular than the language itself warrants.
I see the following reasons for that:
1. Unix (which has been popular in academia since the '70s-'80s) and later GCC (it was hard to compete with a free compiler at a time when most others required an expensive license).
2. Microsoft designated C and C++ as the "official" languages for Windows: the MS-provided IDE, Visual C++, supported only C/C++ [1], and the official documentation implies that everybody uses C or C++ to create Windows apps.
[1] Visual Studio later added .Net support, but this did not reduce C/C++'s popularity, because .Net competes mostly with Java.
There are a few; if you head over to comp.lang.ada you can find threads on the issue by people who were involved at the time. As I understand it, though, there are four or five points:
1. Ada was designed and specified completely before any implementations existed; it used then-bleeding-edge theory and integrated several big concepts/features. This led to the very first 'implementations' being either incomplete or pretty bad performance-wise.
2. The backlash among DoD contractors' programmers: "Don't tell us what language to use!"
3. The rise of C's popularity; I believe this had massive consequences, ultimately setting back the field of computer science by decades. [Take a look at Multics, VMS, and the Burroughs machines... then realize how many of their features have been added to 'popular' OSes in the last 10–15 years.]
4. Misunderstanding the compiler's mindset: a lot of programmers take the view that they need to "shut the compiler up", rather than seeing the compiler as helping them out by finding problems.
5. Misunderstanding "mandatory checks" — a lot of programmers are used to C & decedent's "simple" nature and really don't understand how things can be leveraged. A good example here is the sequence F(F(F(X))); if F takes Natural and returns Natural, then there is only one check that is needed for this sequence: N on the innermost call... and if X is defined as natural, even that check can be optimized away.
Wow, reading through some of these examples, it's pretty clear where VHDL got its syntax! I think I do recall some relation between it and Ada, but it's almost uncannily close. (Looking it up now, it was apparently a requirement that VHDL be based as much as possible on Ada.)
VHDL was initially developed by the DoD back in the time when there was a strong push towards unification of programming languages used in government/defense contracts.
Newer VHDL deviates from Ada syntax in some unfortunate ways that can lead to confusion. For example, ".all" in Ada is used to dereference pointers, while in VHDL it is used to import everything from a library.
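For anyone who hasn't seen the Ada side, a tiny sketch of the dereference usage:

procedure Deref_Demo is
   type Int_Access is access Integer;
   P : constant Int_Access := new Integer'(42);
   V : Integer;
begin
   V := P.all;  -- ".all" explicitly dereferences the access value
end Deref_Demo;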
Also, coming from Ada you eventually realize that many FPGA vendors' tools don't conform to the standard with regard to certain features that are seldom used by hardware designers but are bread and butter for Ada developers. (enum -> int and int -> enum conversions were partly unsupported in the Xilinx toolchains some years ago; not sure if the problem persists.)
The inclusion of Java in the title is misleading: "For those coming from the Java world: there's no garbage collector in Ada, so objects allocated by the new operator need to be expressly freed." So really, you first need a prerequisite which explains how to program without a garbage collector, but this will have a much greater scope than a book that just teaches you Ada.
Yes, they are bounds-checked: at compile time, or at runtime if the compiler is not able to prove the bounds at compile time.
Overflow is checked.
However both can be disabled via unsafe code pragmas if so desired.
As of Ada 2012, the SPARK proof system integrates with the language's contract aspects, and you can also use DbC contracts in formal proofs.
Many of the use cases that in C++ would require new/delete are handled by the compiler itself; thus there is an error if, when a function is called, there is not enough space available.
In the cases where there is a need to explicitly do malloc/free-style programming, only allocation (new) is considered safe; manually releasing memory is an explicit unsafe operation and is marked as such.
In addition, Ada allows dynamically-sized arrays to be allocated on the stack and to be returned from a function. Efficient implementation of the latter requires rather non-trivial support on the compiler side, and C/C++/Rust have nothing like it. Yet in many cases it allows one to eliminate new/delete and simplify code.
For example, just consider if C allowed returning a plain C string from a function without any heap allocation. Things like unsafe sprintf/strcpy would never have happened then.
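A sketch of what that looks like in Ada (no explicit heap allocation anywhere; with GNAT the result travels back on the secondary stack):

with Ada.Text_IO; use Ada.Text_IO;

procedure Stack_Strings is
   --  Returns a String whose size is only known at run time.
   function Repeat (C : Character; N : Natural) return String is
      Result : constant String (1 .. N) := (others => C);
   begin
      return Result;
   end Repeat;
begin
   Put_Line (Repeat ('x', 10));  -- prints "xxxxxxxxxx"
end Stack_Strings;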
Accesses - pointers - are automatically nulled when declared. So you won't silently screw up who-knows-what with an uninitialized access, though yeah, it's not perfectly safe.
Also, I think Ada is fully memory-safe but for initialization. That is to say, the only way in which it's not memory-safe, is regarding uninitialized variables. (That's assuming of course that you don't disable checks.)
I'm not sure if it lets you shoot yourself in the foot regarding invalid type conversions, misuse of unions, that kind of thing.
Pragma Initialize_Scalars for the win. But at runtime.
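(It's a GNAT configuration pragma; a sketch of the idea:)

--  GNAT-specific: give every otherwise-uninitialized scalar an invalid
--  value where the type has one, so that stray reads are likely to be
--  caught (pair with -gnatVa validity checks for best effect).
pragma Initialize_Scalars;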
And if you can afford it, CodePeer (static analysis) finds most of the ones the compiler doesn't. And then, if you can live with the SPARK subset, you get proof of initialization in 'bronze' mode :-).
In addition to what others have mentioned in direct response to the examples you gave, there are additional safety factors, both by default and opt-in.
Pointers - accesses in Ada parlance - have rules that help ensure you can't have an invalid access. For one, the object itself has to be declared 'aliased' to create accesses to it. There are also rules to do things like ensure you can't create an access to some type at a higher scope than the type itself.
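A tiny sketch of the 'aliased' rule:

procedure Aliased_Demo is
   type Int_Access is access all Integer;
   X : aliased Integer := 42;           -- without 'aliased', X'Access is illegal
   P : constant Int_Access := X'Access; -- legal: same accessibility level
begin
   P.all := 7;  -- write back through the access
end Aliased_Demo;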
There are also memory pools that you can use. So you can specify that all allocations of a type occur in a specific pool or sub-pool. In addition to letting you control how allocation occurs, you can also use them for memory safety. All items in a sub-pool will be freed when the pool falls out of scope, so you can simply leave deallocations to occur that way.
There are also concurrency tools baked-in, with runtime support at least. Specifically in terms of memory safety, you have protected objects. They will automatically ensure you have a single writer at a time, and will do other nice things like still allow multiple readers. If you need to, you can get a fair bit of control over how it all works.
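A minimal protected-object sketch (declared inside some enclosing package or subprogram):

protected type Counter is
   procedure Increment;             -- exclusive: one writer at a time
   function  Value return Natural;  -- functions allow concurrent readers
private
   Count : Natural := 0;
end Counter;

protected body Counter is
   procedure Increment is
   begin
      Count := Count + 1;
   end Increment;

   function Value return Natural is
   begin
      return Count;
   end Value;
end Counter;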
> dynamic checks (such as array bounds checks) provide verification that could not be done at compile time. Dynamic checks are performed at runtime, similar to what is done in Java.
> SPARK builds on the strengths of Ada to provide even more guarantees statically rather than dynamically. As summarized in the following table, Ada provides strict syntax and strong typing at compile time plus dynamic checking of run-time errors and program contracts. SPARK allows such checking to be performed statically. In addition, it enforces the use of a safer language subset and detects data flow errors statically.