
Same here; I used to be a huge fan of D but then C++11 came along and solved most of my problems (and some that I didn't know D had).


The problem is when you work on big teams and are forced to work in C++98 with lots of style guide restrictions, or when management won't ever allow updating the compilers on the build infrastructure.

Also, compiling C++ code in C++11 mode won't make many developers use Modern C++ instead of C compiled with a C++ compiler, as many still do.

Maybe it works for your context, but in a global context we need to have safer languages for systems programming, which were already available back when C didn't have any meaning outside UNIX.


That's true, but you can write C code in D as well, so switching to D won't make people write idiomatic code either. If they won't learn how to use one correctly then they won't care enough to use another correctly.

And if your company won't let you upgrade compilers, then it certainly won't let you switch languages altogether.


> That's true, but you can write C code in D as well, so switching to D won't make people write idiomatic code either

With Pascal-like type safety. You need to explicitly mark your code as @system to be allowed to do C-like tricks.

This alone is a very big advantage.
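A minimal D sketch of what that means (my own illustration, not from the thread): in @safe code the compiler rejects raw-pointer tricks outright, and you have to opt out explicitly with @system.

```d
// Illustrative sketch: @safe code is bounds-checked and pointer-free;
// C-style pointer arithmetic compiles only under @system.
@safe int sum(int[] a)
{
    int total = 0;
    foreach (x; a) total += x;   // slice access: bounds-checked, no raw pointers
    return total;
}

@system int sumUnchecked(int* p, size_t n)
{
    int total = 0;
    for (size_t i = 0; i < n; ++i)
        total += p[i];           // pointer indexing: rejected in @safe code
    return total;
}
```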


> compiles down to C++. Execution speed is comparable to C++, mostly better.

Uh, what. Can you clarify?


This seems counter-intuitive to people who don't write compilers, but a language that compiles to C++ can perform "better than C++" because it can emit code that no human would bother writing, usually taking advantage of information available in the source language that couldn't be safely determined by the C++ compiler alone. Whole program optimization, for example, allows you to do a lot of aggressive reorganization of code that would be very ugly if done by hand (think of putting all your code in one file and putting everything in an anonymous namespace).
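A toy C++ illustration of the internal-linkage point (function names are mine): with everything in an anonymous namespace, the compiler knows no other translation unit can call these functions, so it is free to inline, clone, or constant-fold them aggressively.

```cpp
namespace {
    // Internal linkage: the optimizer sees every call site in the
    // program, so these can be inlined or folded away entirely.
    int square(int x) { return x * x; }
    int sum_of_squares(int a, int b) { return square(a) + square(b); }
}
```

A whole-program optimizer effectively gets this view of every function at once, which is why machine-generated "one big translation unit" C++ can beat hand-organized C++.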


Then say "better than hand-written C++", not "better than C++". It's like saying "C++ performs better than assembly". No it doesn't. It may perform better than hand-written assembly, and that's completely different.

It's just annoying when people will do anything to get on the "faster than C++" wagon.


May be worth noting that the Chinese didn't name their country China, they named it Zhong guo.


Which is almost more arrogant, since it translates to "the country in the center". It's a little easier to understand why there's a strain of "we are the center of the world; outside our borders are barbarians" when you look at pre-modern borders: China is bordered by oceans, the Russian steppes, the Xinjiang desert, and the Tibetan plateau.


Well according to Wikipedia (yeah I know, but I don't feel like searching elsewhere) it may or may not be taken with the meaning "center of the world." And even if it is, it's more of an observation than a boast, since from their perspective they were in the middle of a bunch of other countries like you said. And hey, they call America "beautiful country" so they're not being stingy.


Ouch, camouflage on a tank is a good analogy. Nice response post.

In addition to, as the author encourages, being "weary of the 'by obscurity'" argument (as I'm sure we all already are), I would also advocate being wary of it :)


No it isn't. Every server runs SSH, so this is more like there's a field, and you know there's a tank in the field, but you can't see it.

The next thing you do is take out your standard radar device, which scans the field and pinpoints exactly where the tank is in 3 seconds; then you aim your tank buster at that spot and fire.


Or, you put up a fake, camouflaged tank and let the enemy reveal themselves when they attack it. (Leave 22/tcp open as a honeypot, triggering an immediate iptables drop).
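A sketch of that trap with iptables (rule details illustrative; the real sshd would be listening on some other port). The first rule drops all traffic from sources already flagged; the second flags anyone who touches the decoy port 22.

```shell
# Drop all traffic from addresses previously caught probing the decoy.
iptables -A INPUT -m recent --name honeypot --rcheck --seconds 3600 -j DROP
# Anyone connecting to the decoy port 22 gets flagged (and dropped).
iptables -A INPUT -p tcp --dport 22 --syn -m recent --name honeypot --set -j DROP
```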


In my experience most of the time "attackers"/script-kiddies just scan over a range of IPs for port 22, and if it's not open on your computer, they just move on to the next IP. That's why you get thousands of requests for port 22 and very few on say port 21.

Of course, that wouldn't stop someone willing to spend more than a few seconds attacking your server, but it still makes the camo analogy quite a nice one in my opinion.
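That's also why simply moving sshd off the default port drops you out of those bulk sweeps; the relevant sshd_config change is one line (port number arbitrary):

```
# /etc/ssh/sshd_config: listen on a non-default port so that
# sequential port-22 sweeps pass this host by.
Port 2222
```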


Except that unless your only target is that one tank, you're not going to scan all its ports.


Seriously, I think "wary" might be the most-misspelled word in the English language right now....


They're are much more common culprits out their.



I was thinking about this the other day and I came to think that Wikipedia's rule about citations is this: Wikipedia does not want to be a source of information. It wants to be a collection/aggregation of other sources of information, and those other sources have to be accessible to others for all time (i.e. not a human being but a book written by a human being).

What I mean to say is that Wikipedia's system is one that does not/will not give article editors credit for original content; it treats them as just collectors and explainers of original content. When the explainers start writing stuff that has no source, they then disappear and leave Wikipedia holding the content which is now unverifiable. And that's just not how Jimmy wanted it to work I guess.

When I look at it that way, the policy doesn't seem that ridiculous. It may be inconvenient, but that's like saying a stack data structure is inconvenient because you can't remove things from the bottom. It's just the way it is; it has advantages and disadvantages.


I agree that Wikipedia's goal is to be the collection of things that we know that we know. So we do need citations. But when an article is wrong and doesn't already have thorough citations for every claim, should we demand citations for every new contribution?

My point is really about process: when someone comes to us with a bit of knowledge, but no citations and limited understanding of the Wikipedia process, how do we react?

The model where anyone can edit and see their changes live has real benefits and costs. It makes it easy to make your first contribution, but it also means that if an editor doesn't like it, they can just revert it. Sometimes, maybe, changes should go in a queue or something.

I come to Wikipedia, and dump a ton of graph theory on some page. An editor says "look, this isn't quite how it works, but it looks like you're trying to improve the page. Is this stuff you have citations for? Do you know what textbooks would cover it?"

Maybe my edits don't go live immediately, but it's better than the current situation, where they just get reverted by some guy spouting WP:STQ!

(That's the grain of truth in the comment down below that Wikipedia needs to be more like git.)


> But when an article is wrong and doesn't already have thorough citations for every claim, should we demand citations for every new contribution?

Ideally, you'd go through the article. You'd remove anything that is "wrong"[1], and find citations for everything else. Then, when someone wants to add anything, you can find a cite for it, or ask them to find a cite for it, and remove it if there is no cite.

This demonstrates why WP can be horrible - anyone doing this to any article would soon find themselves skewered in horrible WP processes. Good cites are mocked as POV pushing, hopeless cites are forced in by people with more time than you.

> I come to Wikipedia, and dump a ton of graph theory on some page. An editor says "look, this isn't quite how it works, but it looks like you're trying to improve the page. Is this stuff you have citations for? Do you know what textbooks would cover it?"

Sometimes that's how it works. There are people on WP who are great at helping that style of new editor, finding them mentors or whatnot to help them with the process. It's hit and miss; sometimes it fails badly.

> where they just get reverted by some guy spouting WP:STQ!

WP has made some effort to prevent the overactive 14-year-old from using tools to auto-revert hundreds (thousands) of edits per day.


i.e. wikipedia is an encyclopedia.


Agreed, and well said.


Only that's impossible as it is now (and will be for the foreseeable future), because it wouldn't be very wise to give browser apps write access to the local filesystem.


You would not need to be able to edit the offline file. Instead it could still be just a link to the online version of the file, like it is now, but it could additionally contain the actual data, which could be imported back to Google's servers if the file is accidentally deleted from the server.


I was replying to the part where "even if the files are local" i.e. not on the net.


The Google Drive app already has access to the local filesystem, so that is not a good reason not to have a local copy of the data.


Isn't it the Google Drive application that updates those files?



While I sympathize with the author (it must have hurt pretty badly if it made him go create an entire website dedicated to "google drive sucks"), I thought it was quite obvious that files created on Google Drive with their document editor were not actually copied to your computer by the Google Drive program. It's easy to see when you try to open the file with Notepad or a similar local text editor. Additionally, you get a warning when you move a file out of the Drive folder, which you should always heed.

Not only that, but if it were the actual file on your local drive and not a link, there wouldn't be an option to "make this file available offline". There were cues which the author missed, but they were obviously not prominent enough which is a (the) design flaw.

Let this be a lesson that you should have multiple backups for anything important. I personally have 2 online backup systems and 3 local ones, a practice whose worth I learned the hard way, like this author.

Also think twice before you empty the trash - you're manually making these files unrecoverable from the internet, and if your machine went up in smoke after you hit the button, you'd be screwed. That alone warrants serious consideration.


Glad you thought it was obvious. I just learned it after reading the OP's article. Not obvious enough, apparently.


Yeah, like I mentioned, "There were cues which the author missed, but they were obviously not prominent enough which is a (the) design flaw." I focused more on the lessons to be learned but the situation is absolutely very sad for anyone who goes through something like this.

Glad you read the article though, now you're not going to fall for this flaw in the future.


> I thought it was quite obvious that files created on Google Drive with their document editor were not actually copied to your computer with the Google Drive program.

For a techie, perhaps. For a non-techie, that's something they're unlikely to have ever seen happen in their filesystem. Even if they opened it in Notepad, they'd probably not know the ramifications of what's shown there.


If they opened it in notepad they'd see a one-line URL, which is pretty obviously not the 5-page paper they typed. But I suppose many people wouldn't ever open it in anything other than the Drive interface, and I continue to agree it's not obvious enough for certain types of people.
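For reference, such a link file is just a tiny bit of JSON, something along these lines (field names and values illustrative, not an exact dump of the real format):

```
{"url": "https://docs.google.com/document/d/FILE_ID/edit", "resource_id": "document:FILE_ID"}
```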


A non-technical user would not think to open it in Notepad, and would likely assume the data is hidden from view but still there.


What cues did the author miss? Why would the author (or anyone) expect the file they JUST copied to their desktop to now suddenly be in the trash can?


Because Drive tells you so.


Will this let us do SRP over HTTP?


HTTP has now been essentially split into a lower transport layer and an upper semantics layer. If someone created an SRP extension for HTTP (see https://bugzilla.mozilla.org/show_bug.cgi?id=356855 ) it would apply equally to HTTP 1.1 and 2.0. Or you could use SRP with TLS ( http://tools.ietf.org/html/rfc5054 ), but this has the same UX problems as client certs.


It's not really boilerplate in the classical sense; he's just talking about headers and lines that have only a brace on them. I think it's a fair statement.

Plus, he could have reduced lines even further to get down to about 6, like

   vector<string> data;
   string line;
   for (ifstream ifile(argv[1]); getline(ifile, line);)
       data.push_back(line);
   sort(begin(data), end(data));
   ofstream ofile(argv[2]);  // named: ostream_iterator needs an lvalue stream
   copy(begin(data), end(data), ostream_iterator<string>(ofile, "\n"));
And his `return 0` was superfluous so I removed it.

Unfortunately we work with iterator pairs in C++; if we had ranges like D does, we could turn those last two into

   copy(sort(data), ...)
but alas we cannot.
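For completeness, the iterator-pair version as a self-contained unit might look like this (the helper name `sort_lines` is mine, not from the thread):

```cpp
#include <algorithm>
#include <fstream>
#include <iterator>
#include <string>
#include <vector>
using namespace std;

// Read every line of `in`, sort them lexicographically, and write
// them to `out`, one per line: the same job as the snippet above.
void sort_lines(const char* in, const char* out)
{
    vector<string> data;
    string line;
    for (ifstream ifile(in); getline(ifile, line);)
        data.push_back(line);
    sort(begin(data), end(data));
    ofstream ofile(out);  // named stream: ostream_iterator needs an lvalue
    copy(begin(data), end(data), ostream_iterator<string>(ofile, "\n"));
}
```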


You could use boost::range to make it even more succinct


Not sure what AIUI means, but the very words "exception-safe" usually imply RAII.

