Hacker News | cbf's comments

Some years ago I asked one of the IT guys at a company I worked at about this. They had a laborious checklist to go through to make sure that no internal or external web apps broke.

This sounds like a small job, but the number of things to check when dealing with apps built internally and externally over the course of a decade or more is not insignificant. Factor in the failure case being potentially hundreds of staff answering phone calls with "Sorry, our system is broken" and you can understand the conservatism towards upgrades.

In the time he'd been at the company (18 months) two upgrades had been held back, one due to a bad stylesheet in an internal app and a second due to an incompatibility with a third-party site that used client SSL certificates.


Is it safe to assume that every release is going to require binary XPCOM components to be rebuilt?


Yes. We make no guarantees whatsoever on binary compatibility between versions.


Okay, thanks - does bundling an NPAPI plugin with an extension have a future? (js-ctypes can't do what I want, so it seems like the next best thing.)


Curious what you want to do... Maybe extend js-ctypes so that it'll do what you want?


I use an API where you pass in a callback and get an fd in response. You then monitor the fd and invoke a process function when there's something to read, which results in your callback being fired with results. I haven't checked js-ctypes for some time, so I could be wrong about this kind of thing not being possible with it.

I figure if I'm going to do a rewrite I might as well go down the NPAPI path (via http://www.firebreath.org/) as then I can pick up Chrome.
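The fd-plus-callback pattern described above can be sketched roughly as follows. This is a hypothetical stand-in (`FakeApi` and its socketpair are invented for illustration, not the real library), just to show the register-callback/monitor-fd/drive-process loop:

```python
import selectors
import socket

class FakeApi:
    """Invented stand-in for an API where you register a callback
    and get back an fd to monitor. A socketpair simulates the fd."""
    def __init__(self, callback):
        self._callback = callback
        self._recv, self._send = socket.socketpair()

    def fileno(self):
        # The fd the caller is expected to monitor with select/poll.
        return self._recv.fileno()

    def emit(self, data):
        # Simulates the library producing an event internally.
        self._send.send(data)

    def process(self):
        # Called when the fd is readable; drains it and fires the callback.
        self._callback(self._recv.recv(4096))

results = []
api = FakeApi(results.append)
api.emit(b"hello")

sel = selectors.DefaultSelector()
sel.register(api, selectors.EVENT_READ)
for key, _ in sel.select(timeout=1):
    key.fileobj.process()

print(results)  # [b'hello']
```

The awkward part for js-ctypes is that last step: the library calls back into your code, which means marshalling a function pointer across the FFI boundary, not just calling out.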


But routing traffic to a different data center via a DNS change isn't the best approach, as some ISPs cache DNS and would still send traffic to the old data center.

All resolvers cache records for the TTL (or SOA minimum for negative lookups) specified by your authoritative servers.

A solution to this would be to have round-robin DNS across the data centers, but that still suffers if one data center develops a network issue.

If the clients are web browsers they will try the alternative addresses in a record set if the one they pick doesn't respond.
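That fallback behaviour can be approximated in a few lines: resolve the name, then try each address in the record set until one accepts a connection. A minimal sketch (the local listener below just stands in for the one healthy data center):

```python
import socket

def connect_any(host, port, timeout=3.0):
    """Try each resolved address in turn, the way browsers fall back
    across a round-robin record set when one address doesn't respond."""
    last_err = None
    for family, socktype, proto, _, addr in socket.getaddrinfo(
            host, port, type=socket.SOCK_STREAM):
        try:
            return socket.create_connection(addr[:2], timeout=timeout)
        except OSError as err:
            last_err = err  # this address is unreachable; try the next one
    raise last_err

# Demo against a local listener standing in for the surviving data center.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
conn = connect_any("127.0.0.1", server.getsockname()[1])
print(conn is not None)  # True
```

Note that not all clients do this; it's reliable for browsers fetching pages, much less so for arbitrary TCP clients.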


There is nothing wrong with a Linux distribution being focused on fulfilling server or development roles. That is not the role a distribution like Ubuntu Desktop is attempting to fulfill, however. Critiquing Gentoo or Arch or whatnot against the traditional desktop incumbents is unfair; critiquing Ubuntu Desktop, not so much.


The parent was not intimating what you inferred.


It doesn't help that there's a setting in Safari to automatically run "safe" downloads.

This feature has always struck me as a cordial invitation from Apple to trick their users via some means or other.


It used to be worse. Early versions of Safari would automatically execute a shell script as part of a "safe" download. They 'fixed' it by changing the disk image spec rather than the Safari feature. This type of exploit probably reflects how much thought was put into the feature.

The "safe download" social engineering attack was outlined years ago, so it's somewhat surprising it took this long to be widely exploited.


>They 'fixed' it by changing the disk image spec rather than the Safari feature.

Which was the right thing to do, by the way. I don't want disk images running shell scripts when they are mounted manually OR automatically.


I found Steve Vinoski's comment on this exercise interesting:

http://steve.vinoski.net/blog/2011/05/09/erlang-web-server-b...


Erlang was not designed with performance in mind. It's only gotten reasonably performant in its later life.


That's not quite fair. Erlang was designed from the start to be very, very fast at process switching and message passing. True, that design took a while to bear fruit, but my understanding (from listening to Joe Armstrong speak about it) is that being able to make the core of the language highly efficient was a goal from the start.


Trying to make it as efficient as possible is different from the purpose of the language being efficiency.


Indeed, but in this case both apply.

EDIT: Give this a listen, the early design and development is described in more detail than I'm capable of going into: http://www.se-radio.net/2008/03/episode-89-joe-armstrong-on-...


Indeed. I'd really like to see the Haskell servers, warp and snap, thrown in, since the infrastructure there was actually built for raw performance.


I have put together the code to implement the benchmark using Snap and would be happy to share it with anyone willing to boot up the instance and run it.

I might do so myself if I find the time tomorrow.


I don't see what is noteworthy about this. It's not a new site so why the interest?


I think perhaps because the WebCore for iOS 4.3.3 is in there. There was a "squeaky wheel" episode a week or so ago about Apple not releasing all of the source code for LGPL projects in a timely manner, this may be the "grease".


Apple still has not released the requisite source code for WebCore (they redact the code for their WAK* classes, which the license does not allow, and which stymies work on iOS WebCore modifications).


Who wrote the WAK* classes?


Given that these classes have been maintained at Apple over the course of multiple years, I'd imagine multiple engineers were involved in their construction. If your intention is to track them down personally and harass them, I'd rather this be taken up as a matter of corporate policy ;P. (Seriously, though: why does this matter?)


If they were written by Apple employees, then Apple likely owns the copyright on them, and the claim that Apple is required by some license to release their source is then in doubt.


I'm sorry, are you serious? Apple chose, of their own volition, to modify, compile, and then distribute a derivative version of WebCore, a project clearly licensed under the LGPL. In order to be compliant with this license, all new/modified code statically linked to the code they took must be cleanly separated from the rest of the binary in a fashion that would allow a working replacement to be compiled and used in its place. So, whether they want to cough up the code for WAK* or "just" give us un-linked clean object files so I can link their work together myself, as it stands they are simply violating this license, and the only thing "in doubt" is their right to use and modify WebCore.


Out of curiosity… Why would someone use a locally modified WebCore in iOS?


Normally, developers are able to get away with monkey-patching the compiled code in memory; but, in some cases, you simply need drastically different settings. A while back (important, as Apple now added this feature... at least I think) some developers managed to sort-of-almost recompile WebCore (with some horribly reverse engineered hack for the redacted parts) in order to add RTL (right-to-left text rendering) support. Their goal was to get Hebrew better supported, a goal that was mostly successful; the result was used by nearly everyone in Israel.

Even when you take the monkey-patching route I generally prefer (as it allows you to more easily stack patches from multiple developers/vendors), modifications to a C++ library like WebCore are much much harder if you don't have the original source code. In my case, I add a new <script language=""> handler for "text/cycript", a dialect of JavaScript that allows inline Objective-C syntax (similar to Objective-J) that bridges to native code, so it can be used by people developing HTML desktop widgets.


I think the majority of PHP developers could handle Erlang. The core language is small, the syntax maybe at times quirky but it is quite clear, the documentation is good and the community is welcoming. There is perhaps some extra effort in starting a project but with rebar I don't think it's any more onerous than the hoop-jumping involved in using a PHP framework.

If there are any PHP developers reading this who have tried and failed to pick up Erlang, I'd be interested to know what got in your way.


I think the question is why would they want to try Erlang. There isn't any compelling reason for them to, when most of what they (and most web developers) do is write CRUD apps.


I'm inclined to agree. Perhaps as expectations rise over time as to what a CRUD app should be capable of, this situation will change.

