Hacker News | SLuijk's comments

There is gvim for OSX. But personally I prefer vim inside Terminator or a terminal. That was before I switched to Arch Linux after 12 years of using a Mac.


MacVim is the standard on OSX.


It takes a few years to get to that stage :(


You are not trying to gauge whether you would want to re-watch the film. If others would re-watch the film, then maybe, just maybe, you would enjoy watching it the first time.


There is a link at the bottom of the home page called "google.com". It has magic powers!


Why should I link to google.com when that's what I typed in the first place?


That is a good way to approach a new code base, generally. On one project I took on, they were putting SQL queries into a Javascript variable on the page and sending it to the client. The client was then sending a SQL query back via ajax for the server to process. I hope I was not being arrogant calling it/them dumb!
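A minimal sketch of why that pattern is dumb, in Python with sqlite3 (the handler names and schema are hypothetical, not from the project described): the "bad" handler runs whatever SQL the client posts back, while the safe version keeps the query server-side and only parameterizes the data.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

# The pattern described above: the page embeds the query in a JS variable,
# and the client posts raw SQL back for the server to execute verbatim.
def bad_ajax_handler(sql_from_client):
    # Arbitrary SQL injection by design -- the client controls the query.
    return conn.execute(sql_from_client).fetchall()

# Safer: the query never leaves the server; only data crosses the wire.
def good_ajax_handler(user_id):
    return conn.execute(
        "SELECT name FROM users WHERE id = ?", (user_id,)
    ).fetchall()

print(good_ajax_handler(1))  # [('alice',)]
```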


Caution: there may be loads of inappropriate decisions, but that doesn't mean all of them are, even the most curious ones. Assuming so is highly misleading.

I rebuilt a messy WP blog (i18n hardcoding, dead code, etc.) from scratch. Turns out the weirdest thing they did (perverting the category/post system with additional metadata) .. they did it because of a weird bug in the most important plugin they needed, which was no longer open source. Hit me right in the face.

As someone mentioned earlier, one has to explain everything in the system. The code is only the tip of the iceberg.


I think corin_5 was suggesting more that you have the option to display more info on a case-by-case basis. I understand why you don't show detailed information for all contests by default. But for high-profile contests where you have permission from the expert, it would make sense for you to have the option to show detailed information.


This reminded me of those long sales pages with an email list signup box at the bottom. This is what I got from the article. Break it down into small tasks. Work out what the next easiest actionable task is, like, say, "opening an application". Set yourself a time limit on the task. Discover and remove any internal resistance you may have. Use your newfound insight now. Maybe click this link to download our to-do list software.

Don't get me wrong, the advice will no doubt help some of us, most likely more than the software would.


If Google's algorithm goals were not monetarily based, maybe they would have a different outcome.


Sounds very interesting. I was planning on working on a similar setup: taking and improving on the many Chef cookbooks and building around that. But due to lack of resources it may not be possible.

Do you need testers?


Hah, I could always use more testers. When the project is good enough for a somewhat-stable alpha I'll let you know, thanks!


Yes, I quite agree with you, for established domains. It's interesting that only 3% of resolvers are parent-centric.

I was referring more to when registering a domain. To prevent the ISP's resolver caching a non-existent NS record for the negative TTL.
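The negative-caching rule being referred to comes from RFC 2308: a resolver caches an NXDOMAIN answer for the lesser of the zone SOA's MINIMUM field and the TTL of the SOA record itself. A minimal sketch of that rule (the sample values below are illustrative, not measured):

```python
def negative_cache_ttl(soa_minimum, soa_ttl):
    """RFC 2308: negative (NXDOMAIN) responses are cached for the
    minimum of the SOA MINIMUM field and the SOA record's own TTL."""
    return min(soa_minimum, soa_ttl)

# e.g. a zone SOA with MINIMUM=86400 but an SOA TTL of 900
# caps negative caching at 900 seconds:
print(negative_cache_ttl(86400, 900))  # 900
```

So a resolver that saw NXDOMAIN just before you registered can keep serving it for that long, which is the window the parent comment wants to avoid.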


The article suggests that both Google Public DNS and Nominum are parent-centric, which might be a significant portion of the 3% (or larger at this point).

These days with the number of resolvers that have fall-back catch-all records designed to redirect you to a search / suggest feature, I think that you also need to worry about positive TTLs.

You're right that if a domain is pristine and has never been queried, then in all likelihood you'll be able to have it resolvable within minutes, not hours, but this still seems like a relatively uncommon case.

In practice, people do query for their domain as it's propagating, and do buy meaningful names that are likely to have some low-level background rate of queries, and there's not much to stop the legion of bots that are watching for whois updates either.

I guess I take the most issue with your headline. DNS taking 48+ hours to propagate is not a myth.


Problem is, both your link headline here and the premise headlined on your blog are flat wrong, and are going to give sysadmins everywhere headaches if clients come across your article and think they've learned something.

The RFC snippet quoted in this comments thread is the right approach: keep a long TTL in normal practice, shorten it at least double the TTL in advance of a change (e.g., with a 2-day TTL, shorten it 4 days before the change), dropping it down to 3600 or 300 depending on your taste, and bring it back up after the change has stabilized.

In the case of registering a brand-new, never-before-existing domain, avoiding negative caching of the non-existent record can help.

But DNS taking up to (TTL x number of layers of cache) to propagate is not a myth. We routinely see 5-7 days (globally) on 1- and 2-day TTLs, and 2-3 days (globally) on 5-minute TTLs (thanks to ISPs with 1-day minimum TTLs).
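The arithmetic in that worst case: each layer of cache can serve the stale record for one full effective TTL, and an ISP resolver may first clamp a short zone TTL up to its own minimum. A sketch (the layer counts and the 1-day clamp are the figures cited above, not universal constants):

```python
def worst_case_propagation(zone_ttl_s, cache_layers, isp_min_ttl_s=0):
    """Worst case, each cache layer adds a full effective TTL, where
    some ISP resolvers override short zone TTLs with their own minimum."""
    effective_ttl = max(zone_ttl_s, isp_min_ttl_s)
    return cache_layers * effective_ttl

DAY = 86400
# 5-minute zone TTL, clamped to a 1-day ISP minimum, 2-3 cache layers:
print(worst_case_propagation(300, 2, isp_min_ttl_s=DAY) / DAY)  # 2.0
print(worst_case_propagation(300, 3, isp_min_ttl_s=DAY) / DAY)  # 3.0
```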

