Hacker News | cechner's comments

Yep - look up 'Hidden Markov Map Matching Through Noise and Sparseness' by Newson and Krumm

It calculates the final probability by combining 'emission' probabilities (the probability that a GPS observation was generated from a particular road segment) with 'transition' probabilities (the probability that, given an observation was on a particular road segment at one point, the next observation is on some other segment). By combining the two, the final probability incorporates both the nearness of the GPS signals to the roads and the connectivity of the road network itself.
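
The combination described above is what the Viterbi algorithm computes over the candidate segments. Here is a minimal sketch in Python - the segment names, probabilities, and the uniform transition function are all invented for illustration, not taken from the paper:

```python
import math

# Minimal Viterbi sketch: each GPS fix has candidate road segments with
# emission log-probabilities, and a function gives transition
# log-probabilities between candidates of consecutive fixes.

def viterbi(emission_logp, transition_logp):
    """emission_logp: list of {segment: log P(fix | segment)} dicts.
    transition_logp(prev, cur, t): log P(moving from prev to cur)."""
    # best[s] = (log-prob of best path ending at segment s, that path)
    best = {s: (lp, [s]) for s, lp in emission_logp[0].items()}
    for t in range(1, len(emission_logp)):
        step = {}
        for s, e_lp in emission_logp[t].items():
            # pick the predecessor maximising the combined log-probability
            prev = max(best, key=lambda p: best[p][0] + transition_logp(p, s, t))
            score, path = best[prev]
            step[s] = (score + transition_logp(prev, s, t) + e_lp, path + [s])
        best = step
    return max(best.values(), key=lambda v: v[0])[1]

# Two fixes, two candidate segments: the first fix is clearly near A,
# the second near B; uniform transitions let the emissions decide.
emissions = [
    {"A": math.log(0.9), "B": math.log(0.1)},
    {"A": math.log(0.2), "B": math.log(0.8)},
]
route = viterbi(emissions, lambda p, s, t: math.log(0.5))
```

With a real road network the transition term would penalise pairs of segments that are poorly connected, which is how the network topology enters the final answer.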

I've found that the formulas in this paper work well in practice only if the GPS updates are relatively frequent.


I've also found that it can be tricky to map "convoluted" routes that don't necessarily go simply from point A to point B.

If you don't mind me asking, roughly what update interval have you found the algorithm to perform badly beyond, and are you aware of any algorithms or formulae that perform better in these situations?


It should be in seconds - the problem is that the paper assumes the 'great circle' (straight-line) distance between two points will be almost the same as the 'route' distance between those points, modelling the difference with an exponential probability distribution.

This means that if the path between two points is not simple (e.g. it goes around a corner), the probability drops off very quickly. If the time between measurements is in minutes, this heuristic is pretty useless (and you should really keep your numbers in log scale!)
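
Under that assumption the transition term is just an exponential in the distance difference. A sketch in Python, working in log space as suggested - the beta value here is invented for illustration, not the paper's fitted one:

```python
import math

# Sketch of the exponential transition model described above:
# p(d) = (1/beta) * exp(-d / beta), where d is the absolute difference
# between great-circle and route distance. Computing the log directly
# avoids underflow when many transitions are multiplied together.

def transition_logp(great_circle_m, route_m, beta=5.0):
    d = abs(great_circle_m - route_m)
    return -math.log(beta) - d / beta

# A near-straight path scores far better than a detour around a corner,
# which is why sparse fixes (minutes apart) break the heuristic:
direct = transition_logp(100.0, 102.0)   # route barely longer than straight line
detour = transition_logp(100.0, 400.0)   # route loops around obstacles
```

The longer the gap between fixes, the more legitimate routes look like the "detour" case, so the model starts rejecting correct matches.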

edit: this is actually shown in figure 8 of the paper where they explore different 'sampling periods'

edit 2: I have not explored other methods yet, but it would probably make sense to start by deriving the formula the way they do, by exploring ground-truth data.

edit 3: I just noticed that my comments are largely repeating what you're saying - sorry!


Ah, that rings a bell now. You can vary a parameter they call "beta" to allow for more convoluted routes, and I think a larger value gives a little leeway for less frequent fixes.

Agreed, the log scale is really important to avoid arithmetic underflow =] I believe OSRM and GraphHopper both do it that way. In my implementation I've flipped from thinking of measurement/transition "probabilities" to "disparities", and I choose the final route that has the least disparity from the trail. It handles trails with fixes around 30-60s apart over a 5-10hr period with decent accuracy.
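
The underflow problem is easy to demonstrate. A small Python sketch (the probability values are invented) of why summing negative-log "disparities" is safer than multiplying raw probabilities:

```python
import math

# Why the log scale matters: a long trail multiplies thousands of small
# probabilities together, which underflows double precision. Even ten
# factors of 1e-40 are enough to hit zero.
probs = [1e-40] * 10
product = 1.0
for p in probs:
    product *= p  # 1e-400 is below the smallest double, so this becomes 0.0

# Summing "disparities" (negative log-probabilities) instead keeps the
# numbers in a safe range and preserves the ranking between routes:
# the best route is the one with the smallest total disparity.
disparity = sum(-math.log(p) for p in probs)
```

Maximising a product of probabilities and minimising the sum of their negative logs pick the same route, so nothing is lost by the switch.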


Actually, beta is less useful than that! I think it represents the median difference between the two distances; it's not a tolerance (at least as far as I can recall after experimenting with tuning this value).

Like you, I have found that it still often gives OK results at slower frequencies, as long as the transition probabilities stay on roughly the same scale as each other for a given observation pair. It does mean, however, that there's no point trying to 'tune' the gamma and beta parameters.


do you have more modern benchmarks that cover the same ground?


https://benchmarksgame-team.pages.debian.net/benchmarksgame/... is the current benchmark site.

The article links to http://shootout.alioth.debian.org/gp4/benchmark.php?test=all... (which is a dead link).

The earliest Wayback Machine capture of the link is https://web.archive.org/web/20060522132352/http://shootout.a... which puts Java at 1.7x C's speed and Lisp at 3.3x. It uses the same toy programs as the current incarnation (rather than the 'hash access', 'reverse lines', 'array access' and 'list processing' benchmarks cited in the article).

The last 2018 crawl of the link is a 301 to https://benchmarksgame-team.pages.debian.net/benchmarksgame/ (the current site).

Note that the current site argues against the benchmarks cited on Norvig's page. From https://benchmarksgame-team.pages.debian.net/benchmarksgame/...

> The best choice of benchmarks to measure performance is real applications… Attempts at running programs that are much simpler than a real application have led to performance pitfalls. Examples include:

> Kernels, which are small, key pieces of real applications

> Toy programs, which are 100-line programs from beginning programming assignments…

> Synthetic benchmarks, which are small, fake programs invented to try to match the profile and behavior of real applications…

> All three are discredited today, usually because the compiler writer and architect can conspire to make the computer appear faster on these stand-in programs than on real applications.


> Note that the current site argues against the benchmarks cited on norvig.

The current site tells you some of the negatives of measuring tiny toy programs (like those shown on the current website) and tells you why, in spite of that, we're going to measure some tiny toy programs.

As you noted, the micro-micro programs referenced by Norvig were replaced a decade ago.


What is the point of this headline, other than to deceive? It is not the headline of the article, and it is not a quote from the text of the article.

edit: I hope I'm not just being a killjoy here - I actually thought this was going to be some kind of political article.


You're not the only one. I was half hoping for some discussion of how mathematical theory illuminates, say, the writings of Bakunin or Goldman.


When I first used Parallels years ago, I was pretty surprised how close to native it could run Windows (though not 3D graphics-intensive apps, as far as I can recall).

I tried running it recently though and the performance was _abysmal_. Just unusable, taking 30 minutes just to boot into a Windows 10 machine. I can fully believe that the problem is with some major change in OSX or Windows or something, but I feel ripped off for having paid for it again.

(yes, I tried all the little hacks and tricks from the forums.)


> Developers do need to pick a set of core technologies, and stick to it or a career can fall apart I bet

I hope not, for my sake :) I'm currently contracting writing C++, but my previous contract was on the ASP.NET stack. Before that I was working on a lot of Java services (and introduced Scala to good effect).

I've also started a side business helping small companies with their everyday stuff by making small apps - sometimes I'll make iPad apps with Swift, but more recently I've used Qt/QML to make something that runs natively on both OSX and Windows desktops. I've also made (very small!) web apps for them, hosted on both Azure (MS stack) and AWS (Linux stack).

After a while learning the core aspects of a new language doesn't take very long, and they generally have to address the same high-level concerns. That said, learning iOS development was a continual hassle but feels like it was worth it to have a native app that I can deploy easily using HockeyApp etc.

edit: I will undermine my whole point here by saying that I abandoned a personal Android project because the dev environment felt so ad hoc and duct-taped. I spent a ridiculous amount of time trying to get the emulator to render GL properly, without any luck (on Windows).


Nah, that's mitsu (as in Mitsubishi)


Mitsubishi means "three water chestnuts" and explains the three-rhombus logo: https://en.wikipedia.org/wiki/Mitsubishi#History


GP is correct. It's right there in the wikipedia link.


holy moly, I have forgotten more than I realised :( embarrassing


Scrum proponents (a label I would tentatively apply to myself) would tell you that 'you're doing it wrong', but unfortunately a point-by-point reply to this article would distract from the general problem here: Scrum is intended to be the straightest line towards measuring your real progress on a project, and not much else.

If you're working on a project where it is important to have as accurate an idea as is realistic of the size of the project, or more specifically of your progress through that project, then I can't see how a methodology could be any simpler.

If having a good idea of the size of your project over time and your progress through that project are not very important from a management perspective, the Scrum artefacts will seem like, and will probably in fact be, needless overhead.

Scrum is not opinionated about the actual development methodology so claims about how it affects the code that is written are themselves a bad smell IMO.


> Scrum is not opinionated about the actual development methodology so claims about how it affects the code that is written are themselves a bad smell IMO.

Scrum is actually part of the problem, IMO. I've seen many teams turn scrum into a hammer and treat all future problems as nails.

Example problem: The foobar story has failed for the third sprint in a row.

Likely discussed in retrospective (plausibly good ideas, mind you):

- We need to break down stories more before we estimate them.

- Or we need to stop underestimating foobar stories.

- Or we need to focus on unblocking subtasks related to foobar stories.

Probably unconsidered:

- The foobar code is a mess and needs to be refactored.

- Or the foobar subsystem is too coupled to the Fizzbuzz subsystem.

- Or the foobar ecosystem needs developer tools to increase productivity.

Since scrum is methodology-oriented, methodology is the first tool teams reach for when a problem is encountered. And I see this even after team leads make it explicitly OK to discuss technical subjects in retrospectives.

I'm not a psychologist, so I can't describe why this phenomenon happens, but I see it regularly.


All of the items you listed under unconsidered should be brought up by the dev team. If the dev team is uncomfortable bringing them up, then that's probably a sign of friction between the dev team and management, which is really common.


I've routinely brought up all of the unconsidered comments in retrospectives. Retros are all about making sprints better, and talking about technical problems is integral to that.


>Scrum is not opinionated about the actual development methodology so claims about how it affects the code that is written are themselves a bad smell IMO.

Pretty much every kind of deadline-driven development ramps up technical debt. Scrum certainly isn't the worst in this respect (developers make their own deadlines, and conscientious ones will build the time in), but the emphasis on commitment and on delivering at the end of the sprint pushes developers to cut corners.

The worst part though, is that the product owner is usually non-technical and will deprioritize stories to clean up technical debt as a result.

IMO for any kind of development methodology to work it must have an opinion on technical debt. Scrum doesn't.


Sprints are meant to be based on the previous sprints' velocity, so any commitment should get smaller and smaller until you can meet it without forcing it.


If pressure is ramping up and quality is going down, the sprints aren't serving their purpose.

One of the few defining characteristics of scrum is that the developers define how much they can achieve, and this estimation is improved over time. If this is not happening there is something else wrong with the culture and Scrum is being used as a scapegoat.


A few defining characteristics of scrum that lead to overly optimistic predictions:

* The prediction is made in a meeting while your head is "out of the code".

* The prediction is made in a group setting, rendering the decisions more easily subject to peer pressure and groupthink.

* The prediction is made up to 2-4 weeks in advance of actually doing the work.

* The prediction is made with no measure of overshoot risk attached. Risk is a critical metric, and scrum conceals it.

And the main defining characteristic of scrum that leads to pressure, after all of that unwarranted optimism:

* The prediction is designated as a commitment.


It sounds as though you're objecting to being required to give any estimate at all.


I don't know how you managed to read that into it. He seems to be saying he would like to be in a situation where he has the means to give good estimates, but scrum forbids that and forces him to give arbitrary, biased estimates.


can we infer that he would like to give his estimates:

* while he is actually writing the code (so not up front)

* not in a group setting but as an individual, so either one person estimating the whole thing or each person giving different estimates

* (third point same as first - he doesn't want to estimate up front)

* must incorporate what is often called 'contingency' (which is actually what measuring velocity is for!)

* and the final point - he doesn't want to have to commit to it

how can you _not_ read this into it?


Assume each person gives estimates for their own work - not up front, but ongoing as code is written.

How is that the same as not being "required to give any estimate at all"?

> he doesn't want to have to commit to it

why not? an estimate is an estimate, not a commitment. Committing to an estimate makes it a commitment, not an estimate.

I might expect a dice roll to be 3.5, I'm not committing to the next roll being 3.5 - analysis should inform policy, in this case expectations informing stated commitments, but the two are not the same.

Furthermore, this bullet point actually takes the quote out of context - He specifically doesn't want to commit to the estimate produced under the previous conditions, not that he won't commit to any estimate. The difference is choosing to commit to an estimate you have high confidence in, versus any estimate given automatically being a commitment (where estimates may be required on demand).


it is totally reasonable for stakeholders to want to track your progress through a project. If you have a good way of doing that then great, you should use that.

Scrum people believe that scrum is the simplest way of measuring that. But at some stage you have to estimate the constituent parts of the project in order to get an idea of its size, and for those estimates to be useful in tracking your progress you have to do it in advance.

I repeat, however: if you don't need to do this then that's fantastic! Many of us do, though, and some of us choose to use scrum for it, and some of us have had a great deal of success with it.

(edit: I worry that this sounds condescending. I am just trying to keep the tone friendly)


> for those estimates to be useful in tracking your progress you have to do it in advance

In advance of what? The only constraint on a useful estimate is that it comes before the task is finished - it needn't be considered credible at the earliest possible time.

Also, your response doesn't really address my post...


(I went to bed so didn't take long to reply before)

I am clearly not expressing myself well. I am talking about a situation where some stakeholders are expecting a complete picture of roughly how large the project is and would like to be able to track how far your team is through this project on a regular basis.

I am putting scrum forward as a methodology for measuring the size of that project, in as short a time as possible, in a meaningful way: break it up into pieces as small as possible, attach numbers to those pieces that capture the size of each piece relative to the others, and then over time discover how long it takes to complete a piece of a given size.

> Assume each person giving different estimates for their own work, but not up front - ongoing as code is written.

The situation I outlined above (the time when scrum helps) requires that you have a stab at estimating all the constituent parts of the project at the beginning.

> an estimate is an estimate, not a commitment. Committing to an estimate makes it a commitment, not an estimate.

True, but the point of estimating in scrum is to assign relative sizes to the pieces of work, not a number of hours, so this isn't a commitment to finish at a specific time but just a statement that 'I think this is one of the larger pieces of work in this project.' The person I was replying to sounds like they are on a bad team/project where people use estimates to blame and finger-point, and they are ascribing this to scrum as if the team wouldn't be doing it otherwise.

And in case you suggest that estimating without assigning a time value is not meaningful: it is used to track how far you are through the project, and over time you refine the finishing date given the emerging velocity.

> I might expect a dice roll to be 3.5, I'm not committing to the next roll being 3.5 - analysis should inform policy, in this case expectations informing stated commitments, but the two are not the same.

The analysis comes in discovering the velocity. The expectations evolve over time. But knowing your velocity is of limited use if you don't have an estimate of the overall size of the project.

> The difference is choosing to commit to an estimate you have high confidence in

This is the method for gaining confidence in your estimate. You have an overall number of 'points' in the project, and you learn how many points you can tackle on average every X weeks.
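
As a toy illustration of that loop (all numbers invented, not from any real project), velocity converts relative point estimates into a rolling forecast:

```python
import math

# Toy sketch of velocity-based forecasting: the project's total size is
# estimated in relative points, and the average points completed per
# past sprint projects how many sprints remain.

def sprints_remaining(total_points, done_points, past_velocities):
    avg_velocity = sum(past_velocities) / len(past_velocities)
    remaining = total_points - done_points
    return math.ceil(remaining / avg_velocity)  # a partial sprint still counts

# 100-point project, 40 points done, last three sprints completed
# 12, 10 and 8 points respectively:
forecast = sprints_remaining(100, 40, [12, 10, 8])
```

The forecast self-adjusts: if points were systematically underestimated, the observed velocity drops and the projected finish date moves out accordingly.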


>The person I was replying to sounds like they are on a bad team/project where people use their estimates to blame/finger point, and they are ascribing this to scrum as if the team wouldnt be doing this otherwise.

Every time you try and infer what I'm "really" saying or what "really" happened to me you get it completely wrong. Next time you do that just assume that you're wrong, it'll save us both time.

The blame/finger pointing on my projects wasn't really external (although in a different environment it certainly could have been). Developers themselves felt bad about missing their 'commitments'. The pressure/blame was largely self-inflicted.

Despite feeling bad, the predictions were still consistently optimistic and still consistently wrong, due to the environment they were made in. It was a bug in the scrum process that led to this, but the team and management (and you, apparently) would rather assign blame to anything other than a bug in their methodology.

>The analysis comes in discovering the velocity.

Velocity isn't a useful metric.

>This is the method for getting confidence in your estimate.

Except it doesn't work. It didn't work for us and it probably doesn't work for anybody else.

Confidence in estimates means treating risk and uncertainty as real rather than sweeping them under the carpet, as scrum does.

Confidence means a prediction process that doesn't make developers feel guilty about being wrong, like it does with scrum 'commitments'.

Confidence means a prediction process that doesn't intentionally subject developers to groupthink and peer pressure by immediately putting them on the spot, like scrum planning part 2 does.

Confidence means that your estimation process itself should be mutable. Under scrum it is fixed and not subject to review (if you change it you're doing "Scrum-but" and that's a sin, according to scrum trainers).

Most of all, confidence means that you should be able to inject technical debt cleanup stories into the sprint that derisk future changes. Scrum says that's only allowed if the PO says it's allowed. The PO is not responsible for missed commitments though, so it's not their problem.


>* while he is actually writing the code

Yes. I can take time out to answer email. I can take time out to make estimates as soon as I get an estimate request. Doesn't have to be done in a meeting.

>(so not up front)

What the fuck is the point of an estimate that's not made in advance???

>not in a group setting but as an individual, so either one person estimating the whole thing or each person giving different estimates

The latter. Is that a problem?

>(third point same as first, dont want to estimate up front)

"Not up front" is not the same thing as "not 4 weeks in advance". I'd do it as soon as the PM needed it to do prioritization.

>must incorporate what is often called 'contingency'

If you think risk and contingency are the same thing you're an idiot. Risk is that story A (e.g. upgrading dependencies) might take 0 hours or might take 4 weeks, while story B (updating translations) is going to take 1.5 hours, and really only 1.5 hours.

Contingency is (for example) "let's make sure we have 4 weeks spare before doing story A".

>(which is actually what the whole point of measuring velocity is for!)

No, velocity is about measuring how fast you're doing stories.

>and the final point - he doesn't want to have to commit to it

Yeah, because as soon as you start assigning blame for missing feature deadlines the technical debt dial gets ramped up to 11 and predictions become an exercise not in being accurate but in CYA.

An estimate of how long something will take can be wrong for many reasons that aren't the developer's fault - bugs in libraries, technical debt in dependencies, technical debt they weren't aware of and didn't create, team members disappearing, etc.

If you want developers to commit to things make sure it's things that they have full control over.


The tone of this post is uncivil, e.g. "If you think ... you're an idiot."


(replying here because I guess we've reached the maximum depth)

I am assuming here that you want to be able to measure your progress through the project (as I mentioned, this is the only thing scrum does for you). Both of you seem to be suggesting (don't insult me if I'm wrong) that this isn't the highest priority.

And no, velocity is there to make the whole system self-adjusting. If I put 3 points against a story, we use velocity to discover over time how long those 3 points take. This self-adjusts to incorporate contingency.

If you disagree with this then we simply disagree on what velocity is about. It doesn't make us enemies; we don't need to get super pissed off at each other.


I've seen the "you're doing it wrong" argument so many times (I applied it myself a few times).

Scrum is complex and not always possible to follow exactly, so this is to be expected, but it makes me wonder: how many successful projects out there are actually following the true Scrum methodology?

My guess is that it's a few more than the classic waterfall but I still seem to see far more failure than success stories.


The very idea of a one-size-fits-all process is unrealistic IMO. Something will always be customised in practice.

Regarding success stories, it might be that process doesn't play such a critical role as long as solid engineering techniques are used and the team is competent.


If your team is competent and solid engineering techniques are being used, you already have a process that works well. Forcing any methodology on top of it will likely result in deterioration.

All those methodologies are for the less stellar programming teams, to get consistent results from them (and, to a lesser degree, to make good and bad programmers work well alongside each other). Because you can't always get the best programmers.

If Scrum only worked well with good programmers, it would be next to useless.


Successful big waterfall engineering projects where waterfall is actually applied do exist. Want to construct a bridge or a rocket, or design a microprocessor? You are not going to do that with "stories".

It remains to be seen whether big Scrum engineering projects where Scrum is actually applied even exist. I can't think of one off the top of my head. I'm not even sure Scrum is well enough defined for us to be able to judge whether it is correctly applied or not. And it's yet another matter to judge whether such projects are successful or not.

In the end it does not matter much. A theoretical vision that nobody ever uses is of almost no interest if you are concerned with real-world effectiveness.


> Successful big waterfall engineering projects where waterfall is actually applied exist.

You are engaging in equivocation.

> Want to construct a bridge or a rocket, design a microprocessor? You are not going to do that with "stories".

Nor are you going to use the software development methodology described as the waterfall method (you may use a physical engineering methodology that was among the inspirations for that software development methodology, but those are distinctly different things, with different specific practices, and different domains.)

> I'm not even sure Scrum is that well defined for us to be able to judge if is correctly applied or not.

Scrum is exquisitely well-defined -- what it involves, what it specifically excludes, and what it is neutral to -- in the Scrum Guide. (There's lots of confusion between Agile, a broad approach which is not a specific methodology, and Scrum, a very specifically defined -- though by itself fairly incomplete, in that any implementation of Scrum needs lots of decisions on the things to which Scrum is neutral -- methodology.)


OK, I maybe went a little far with the bridge, but today a microprocessor is way more similar to software than to a bridge (at least in some of the design phases, and now even in some maintenance phases). A modern rocket also contains tons of software. And waterfall is similar enough to (at least non-software -- but in my thesis also software) engineering to even consider a direct equivalent for the bridge. Only, just as a description of a method is often not enough to see how it is properly used, the mythical "waterfall" where each phase strictly begins after the previous one never happens; there are all kinds of loopbacks -- even for the bridge -- and obviously if you try to remove the loopbacks things will get fucked up, but why would you try to do that? In real-world conversations, "waterfall" is used to designate software being developed with proper general engineering practices.

Scrum's origins are partly in manufacturing. Now, there are some common points between some aspects of software dev and manufacturing, especially if the software being developed can be iterated very quickly (but very few if that is not the case). But at least in the real world (and maybe even in the theory), Scrum is mainly used to structure interaction with the other stakeholders. And given how that communication is performed, and its content, it might be better than the complete chaos when nobody is actually able to do the work they are supposed to do (a PM limited to having vague ideas, the lack of a truly competent tech lead doing actual tech-lead work, the lack of vision from management, and so on) and only very vague general ideas of what the software -- or more generally the whole product -- should do are ever emitted.

As soon as "serious" stuff starts to be involved, you need real, boring engineering: functional analysis, requirements engineering, modeling, systematic testing or even partial proofs, etc. And you need that to structure communication between teams, and day-to-day work. In such a context I don't expect Scrum, or anything Agile, to add any kind of value.

Now the theory of Agile and Scrum has evolved, in response to criticism, to the point where we are told that it actually does not cover the things that matter. That is bullshit retro-justification, now that the world is fucked up trying to make sense of how to use it. Here is the Agile manifesto:

> We are uncovering better ways of developing software by doing it and helping others do it. Through this work we have come to value:

> Individuals and interactions over processes and tools

> Working software over comprehensive documentation

> Customer collaboration over contract negotiation

> Responding to change over following a plan

> That is, while there is value in the items on the right, we value the items on the left more.

Engineering is mainly about "processes and tools". Of course "individuals and interactions" are also needed, but there is no need to oppose them (although I am not sure what the point of "individuals" is here; the authors might as well have said "oh, and by the way, be nice").

"Comprehensive documentation" is critical in all kinds of domains, and now that software is everywhere it just makes no sense to declare a "preference" for "working software" over "comprehensive documentation". It is, again, even dangerous to oppose them.

"Customer collaboration over contract negotiation": again, it is highly dependent on the field and the specific project whether it even makes sense to have a "preference" here.

"Following a plan" is precisely what you do, in terms of organizing your work, when you use Scrum. And there is no problem in studying the impact of a change at any time if proper engineering practices are used. Obviously, the cost can vary depending on various factors.

My conclusion about Agile and Scrum is that if you share all of that (the 4 Agile preferences, and the Scrum theater), you should seek out projects that suit the Agile preferences and are so poorly defined that Scrum is a plus. For my part, I'm just not seeking to work on chaotic projects -- on the contrary, I try to bring logical and more systematic practice where I feel chaos reigns -- and I'm neutral about the Agile preferences; I prefer to choose projects on other criteria (mostly: intrinsic interest).


> Engineering is mainly about "processes and tools"

And Agile does not avoid processes and tools; it recognizes that processes and tools must be specifically fit to the particular team and context of work. (Scrum, particularly, is a baseline set of processes and tools designed to serve as a framework for common contexts of software work -- it's intentionally incomplete, to avoid specifying so much that it would narrow its scope of applicability.)

> "individuals and interactions" are also needed, but there is no need to oppose them

The need to oppose them comes from the authors' concrete experiences in the software world before writing the manifesto, where very frequently canned (often consultant-pushed) processes and tools were being adopted by management in shops without considering the dynamics of the existing team and the particular work being done. (One of the sad ironies of the Agile movement is that the "Agile" banner itself has become a tool for the same kind of thing.)

> "comprehensive documentation" is critical in all kind of domains

Yes, it is; the preference stated in the manifesto is, again, the result of concrete experience where projects were quite often focused on producing mandated documentary artifacts because there was a checklist and that was how "control" was exercised, but the documents required and delivered were often irrelevant to (and not consumed by, or updated to reflect changes resulting from, the process of) delivering working software.

> Customer collaboration over contract negotiation; again, highly dependent on the field and specific project if this is something where it makes sense to even have a "preference" or not.

This is intended specifically in the context of developing specific software requirements (and, really, it's more about the dev team pushing the customer to engage, rather than providing hands-off requirements.)

The Agile Manifesto really deals with concrete problems encountered particularly in enterprise software contracting (though bad practices from the enterprise world were, at the time, being exported to the rest of software development, so it's not limited to the enterprise world.)

> "Following a plan" is what you do about how you organize your work when you use Scrum.

Scrum, like most methodologies that attempt to implement agile values, focuses quite a lot on managing potential rapid change within the plan.


Well, I've got "concrete experiences" in the software world after the manifesto, where this has been interpreted as "fuck processes and fuck tools" (except those of Scrum, regardless of their applicability -- and they are not applicable to the majority of projects, far from it) and has let idiotic work continue, now that we have a name for it. This is not better than the previous situation. Honestly, if some management is stupid enough to force badly suited processes and tools instead of letting (competent) teams choose better ones, I doubt they will suddenly see the light by reading the Agile manifesto. And again, in too many actual implementations, working software is not really an output of Agile processes... except now you don't even have documentation any more. Actually, to get non-trivial working software, good documentation is essential. You don't solve anything by declaring that you prefer "working software", especially when you are trying to fix a situation where the documentation is mandatory but poor. And guess what, the "client" also wants "working software"...

Scrum is what you do when you try to do software engineering without actually doing software engineering. It's insanely meta, and as explained in other comments, the improvements you get from its loop are too often meta (we should evaluate more accurately). I prefer to stick to the real thing, and core engineering practices. Scrum attempts to fix situations where core engineering practices are misunderstood and used as constraints instead of being used as something essential to the dev of a good product; but it is vain to try to fix such a situation by engaging key people even less in core engineering practices, and more in mundane discussions where the real problems are never addressed.


> Well, I've got "concrete experiences" in the software world after the manifesto, where this has been interpreted has fuck processes and fuck tools

Oh, yeah, that's definitely a problem. I don't think the Agile Manifesto is bad at all, but I think that, ironically, in application it suffers from the same problem it sought to address -- people are looking for simple answers that can be applied without deep knowledge of context. The Agile Manifesto and Agile software movement was itself a strong reaction against that, but unfortunately it (and tools from within that movement, like Scrum) get applied by exactly the same process that the Manifesto was a reaction against (focusing on particular ways it had manifested, prior to the Manifesto, in software development.)

> Honestly if some management is stupid enough to force badly suited processes and tools instead of letting (competent) teams choose better ones, I doubt they will suddenly see the light by reading the Agile manifesto.

Absolutely; the real audience of the Agile Manifesto is software development practitioners that have influence with management, and it's not really "new knowledge" so much as a concrete distillation of experience. The fundamental problem, I think, with Agile isn't that its ideas are bad, it's that the real problem it deals with isn't a problem of process/tools, or even the meta-level approach to processes and tools, but a problem with institutional organization and leadership of large entities that happen to be doing software projects, and how that manifests in software projects.

The agile movement has produced some new tools that can be applied effectively in, largely, the areas that didn't really have the worst cases of the problems that motivated the movement -- because it's helped motivate and inspire a lot of efforts by people with decent engineering backgrounds at finding new ways of working.

But the kinds of organizations that were worst afflicted by the problems that the Manifesto set out to address are still the most afflicted by those problems, and what they've gotten out of it is a lot of new processes and tools that consultants will sell them, their management will blindly adopt without understanding the conditions which makes them useful, and thus they find all kinds of new ways to fail.

> Scrum is what you do when you try to do software engineering without actually doing software engineering.

Scrum is largely orthogonal to software engineering (presumably, people using scrum in a software project will be doing software engineering within Scrum, but Scrum is not about software engineering.)

> It's insanely meta, and as explained in other comments, the improvements you get from its loop are too often meta (we should evaluate more accurately).

Scrum is designed to be very meta, true. And, yes, if you mistake Scrum for a complete process rather than a process framework, you aren't going to get much out of it beyond omphaloskepsis. (I'm actually not convinced that Scrum is particularly valuable, even as a framework, as anything more than a well-known starting point to develop an appropriate, context-specific work model.)


I agree that Scrum is dead simple, but that doesn't mean it delivers sensible estimates, or allows you to get somewhere with less effort than some other methodology. You might end up doing more (and worse) work because Scrum is trying to be too simple and linear, which I argue is the case in the post. But it's simple, I definitely agree.

Regarding development: My main point is that Scrum leans towards agile methods such as XP (testing, CI etc), but it also sucks the time necessary to do those things well. The time Scrum takes off of the devs' working hours could much better be spent on those.


>>> Scrum is intended to be the straightest line towards measuring your real progress on a project, and not much else

There's slightly more to it than that: it also encodes an assumption that you're working with a single fairly-tightly-integrated group (with synchronisation points at least daily). It's possible that this helps with estimation and scheduling -- it's a lot less clear that it helps get the best outcome in other respects.


I agree, it is often not the best approach. But many situations demand a well-defined approach to estimation, and although the OP tried to preempt this, he didn't provide an alternative


I reckon most experienced coders can cope with estimation when it's justified (i.e. "can we realistically get this done before <specific, real and externally-imposed, deadline>? And if not, is there a useful subset we can manage?"). The bigger problems come when estimation isn't about keeping promises, but rather a part of some form of scientific management aimed at "getting velocity up".

There's also something of an uncertainty principle here -- more precision of estimation is possible, at the expense of increased expected timescales (partly due to padding, partly due to picking lower-risk approaches).


if it's being used to 'get velocity up' instead of measuring velocity then it's not being done right.

I personally think estimating projects is one of the most difficult things about this industry. Especially if we're talking about delivering many calendar-months' worth of effort for a team, unless it's just a variant on some other project[s] the team is well experienced at


oh and I have attended some of those expensive Scrum 1-week courses and saw the darker side of that community - it definitely has a cult following that give it a bad name, but I've been to similar conventions around design patterns, object-oriented and (to a lesser degree) functional programming so I think that the community problem is not particular to Scrum.


Those with the loudest voices simply have the loudest voices, be they right or wrong.


The problem is communities.


interesting. Do you have an alternative in mind?


"Scrum is intended to be the straightest line towards measuring your real progress on a project, and not much else."

More like wandering in the desert, hoping you find the promised land.

Been thru scrum master training 3 times, been on many "agile" teams. I've never heard this rationalization. Rather, a common justification for "agile" was you always have a working product. Which might be nice if things worked out that way.

Also, PMI style critical path worked just fine for figuring out that "straight line".

Scrum and "agile" democratized project management, empowering every poseur to claim expertise and ability. Whereas PMI required real effort to learn and master, Scrum flavored "self help" books can be flipped thru before you finish your coffee and then safely stored in plain sight on a book shelf, never to be touched again, allowing said poseur to claim the daily mutant chaotic dysfunctional mismanagement that they've always done is now "agile".


If you're objecting to people who treat scrum (or any project management tool) as a one-stop-shop that will cure all ills I agree with you, but nobody here is saying that.

If you are objecting to defining the scope as small tasks and measuring your progress through that over time, then continually re-evaluating this scope as requirements change, then I think you are not working in an environment that would benefit from this kind of tool.

It's just a pragmatic set of guidelines, and objecting to it with such ridiculous vitriol makes you sound as foolish as the people I think you're objecting to.


My goal is to ship products that people will buy and use. Scrum and "agile" has only been an impediment.

"with such ridiculous vitriol"

Emperor, little boy, no clothes. It's thankless work.

In opposition, defenders of Scrum et al use the No True Scotsman's fallacy. Because those of us who have tried and failed are just morons.


Considering the failure rate of PMI-led projects is even higher than that of agile projects for software, I really wouldn't hold it up as the way to go.


PMI (critical path) != waterfall. But then that's also said of "agile", which too often devolve to waterfall.

Project management is risk mitigation. In my experience, most "agile" projects have been risk amplifiers. Ironic.


curious - I remember when modules were first proposed I thought Apple was heading up the initiative, based on their work with Objective-C. I remember reading slides someone at Apple prepared describing the likely syntax and everything...


That is, more or less, the current clang proposal.


I'll reply to the root of all your comments, but this comment by you below sums up the problem:

> If your software has clear requirements, has a point when it is done, and only requires minimum maintenance after that, you aren’t writing agile.

This is simply not true. All 'agile' projects I've worked on have had a complete-as-possible analysis phase where we figured out the scope and the domain up front. This is not anti-agile at all, but is necessary on any project anywhere you are working on. (Agile is largely about avoiding 'big design up front', not 'big analysis up front'. There is a massive difference between analysis and design.)

Agile is about changing your plan when the _requirements_ change. Your API or whatever should change if your requirements change no matter what methodology you are using. But with waterfall you will not be able to and you will end up with a useless API.


The issue is that there are projects where changing your API is impossible, which means that using Agile is often a hopeless concept. Because if huge insurances already depend on your API, no matter how Agile you are, you can’t change it anymore.

And there are many cases where your code will be frozen at one point. Even if the requirements change.

Especially for Internet-of-Thing devices this can be very problematic, as no one is going to ever update them.


I don't really understand this. "NF_REQ_00: API must not change"

Add verification tests to ensure the API remains as documented. Every time someone checks in code your tests are run; they break if something changes.

Every project has functional and non functional requirements, you write tests for them, your project is in a failure state if the tests are not passing.
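A minimal sketch of what such a verification test could look like, freezing a public API surface against a documented snapshot. The `BillingAPI` class, its method names, and the requirement label are all hypothetical placeholders:

```python
import inspect

# Hypothetical public API that "NF_REQ_00: API must not change" covers.
class BillingAPI:
    def create_invoice(self, customer_id, amount, currency="USD"):
        ...

    def get_invoice(self, invoice_id):
        ...

# Frozen snapshot of the documented contract.  If a checkin alters any
# signature, the test below fails and the build breaks.
EXPECTED_SIGNATURES = {
    "create_invoice": "(self, customer_id, amount, currency='USD')",
    "get_invoice": "(self, invoice_id)",
}

def test_api_is_frozen():
    for name, expected in EXPECTED_SIGNATURES.items():
        actual = str(inspect.signature(getattr(BillingAPI, name)))
        assert actual == expected, f"{name} changed: {actual} != {expected}"

test_api_is_frozen()
```

Run on every commit, this turns "the API must not change" from a wish into a failing test.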


How do you update the code on a Smart Washing Machine? A Smart Pacemaker?

In the world where everything is digital, we’ll have a huge technical debt of un-updatable software.


> The lifetime for a washing machine is 30 years. Your software on that will last 30 years. Using a development method designed to make quickly changing requirements easy is stupid when your code will be "write once, never change".

If the requirements change, you must update your software. How does waterfall handle changing requirements? It doesn't.

To repeat: agile _handles_ changing requirements. It doesn't _provoke_ changing requirements.

As someone who spent the first 6 years of my career doing waterfall, the only way to combat this is by treating the functional requirements as an immutable contract between you and the customer. When the requirements change you blame them, and refuse to change your software. Then your software never gets used.


> If the requirements change, you must update your software. How does waterfall handle changing requirements? It doesn't.

How does agile handle change? By assuming everyone will update and has no issue with all third-party accessories constantly breaking.

If you can’t ever change your code after you’ve written it once, then Agile isn’t useful to you.


You seem really hung up on this idea of clients having to update code that is already out in the wild when requirements change.

Maybe I've had a twisted experience of Agile, but it seems like its most useful when you are working on a greenfield project, with a customer that maybe doesn't even know what they want, but they know they want something. So you get some requirements, and build a prototype. Then you show it off, and they make some comments, (generally like: "Can it run in the Cloud? Is it Social?", "Could that icon be more of a cornflower-blue?"). Sometimes you get useful feedback also... Then you go back and refine the prototype. Rinse and repeat, and eventually it does enough that they are happy.

Even when you do know "exactly" what you are building, I would still prefer starting from a bare-bones version, and building out from there. Unless you've built this exact thing before, planning it all out beforehand will invariably miss some Rumsfeldian unknown-unknown technical detail, which might tip up the whole apple-cart of the carefully, laboriously, expensively, laid-out plan.

It's not like its 1995, and releasing a software version involved burning thousands of CDs, printing manuals, boxing it up, and distributing the boxes to retail stores. And most of us are not launching the software we write out beyond low-earth orbit.


you mean firmware updates? What is the problem you are talking about?

Plus it's a regular problem ensuring that an API is stable. People do it all the time - aren't you wondering why you are the only person arguing this point? Many people in this thread deal with these problems on a daily basis...


The problem is that no one is going to do firmware updates on their smart washing machine.

Your software can’t be updated. You have one try to do it right.

It’s what next to all of the modern startups don’t get right. They build fancy software, but then in a few years your smart house doesn’t work anymore because the services it connected to have changed APIs and the house itself encountered a few bugs?

The lifetime for a washing machine is 30 years. Your software on that will last 30 years. Using a development method designed to make quickly changing requirements easy is stupid when your code will be "write once, never change".


You are buying much higher quality washing machines than I am, apparently. If you get 8-10 years out of most brands now, you're doing quite well. Planned obsolescence is just the best...


Mostly quality Miele machines. They have 10 years warranty from the manufacturer, so most actually survive for 20 years. Not what they used to make – they used to survive a lifetime – but it’s okay.

Same with stoves or fridges.

We already see how hard it is to keep mobile devices updated. Android is the nightmare example, but even Apple drops devices after 4 years. In 20 years, your smart fridge will have tons of malware on it if it’s connected to the internet. If it isn’t connected, you won’t be able to get updates, so the software has to be perfect.

And the point we made in Uni was that agile is suited for situations where your requirements change after deploying. In all other cases you can do waterfall – provided you actually find out what you’re supposed to do – better.


I'll have to look into that brand the next time I'm in the market - I've been hearing a lot about them on here lately.

Personally, I want my appliances 100% electromechanical. I don't really get the whole IoT buzz. Besides simple reliability and repairability, I have no desire to control my stove or washing machine with my phone. I can get up and punch a button.


Then you’re the perfect customer for the German market! Next to no modern hip bullshit. Next to no IoT, next to no "Agile", or "moving target", or "constant updates", instead technology like it used to be – buy once, use a lifetime.

Warning: Expensive. Really expensive. Their washing machines start at 1300 USD.


If you are writing integrations for API features that are fantasy at this point, you are digging your own grave. You have extended your risk and when you find out six months from now that you bet the farm on a useless feature, oh well; thems the breaks.


I don't really agree with this - Scrum isn't really easy to do but in many situations it is the simplest approach to take.

If you need to monitor your progress in 'real' terms (i.e., what's actually completed), Scrum is pretty much the minimum ceremony you can get away with in my experience.

If you don't need to do that, say if you don't have a deadline that you need to know you won't hit ASAP, then Scrum is likely not a good fit.

> It’s boring, its old, it leads to pointless meetings run by people who don’t write software, who don’t understand the technical process behind writing software and don’t always care

I don't think these people are meant to be running scrums.

> Adhere to the sprint commitment, even if you have to work over time.

Yeah, and you will be if you don't adjust your commitments as soon as you realise you're not going to meet them.

> Agile is about moving fast, building working software today and delivering with changing requirements

Yes it is. And scrum adds time tracking on top of that.

It sounds like you don't really know what the purpose of scrum is. And this is fair enough, because I don't think they had a very good explanatory service - it was pushed as the 'next big thing' like XML and SOAP were in the '90s.


It sounds like you don't really know what the purpose of scrum is.

Unfortunately, every response to "we tried process/technique X and it failed" in this space is ultimately "you didn't understand it" or "you did it wrong".

So no true scrum would have...

No true Agile shop would have...

And where does that get us? It appears that the success rate of people understanding and implementing these things is very close to 0%. Maybe the problem is with the processes/techniques after all.


It appears that the success rate of people understanding and implementing these things is very close to 0%

I don't think this is the case. You're likely only hearing about the failures.

Here's a counterpoint – we implemented Scrum. It didn't cause any real issues, increased the speed with which we delivered features to customers, and increased the visibility of everything that was happening both within the development team and within the wider business.

Part of that was having a dedicated individual responsible for managing and implementing that process who made sure that it was actually achieving things we wanted to, rather than being a tick-box exercise, and who isn't afraid to make modifications such that the process better reflects the needs of the team.

In fact, I'd argue that the problem is completely the opposite to what you imply – it's not that people don't implement 'true' agile or 'true' scrum, but that they try to do so, rather than using these systems as a basis for building a development process that works for their team. No true agile team is slavishly adherent to rules that don't work.


You only hear about the people struggling with agile because the people not struggling are busy working on stuff.


well allow me to put a note here as someone who uses it successfully regularly.

Also I explain several times why he doesn't seem to understand it - having non-technical people running his scrums, working overtime and screwing up their velocity measurements, not being able to cope with changing requirements. These are things that scrum was built to _directly address_.

I say again, if you're criticising scrum as being too onerous, you are probably either using it wrong or using it where you shouldn't (where you don't need to track your progress.)


At the same time, many stories have been of people who have done it horribly wrong, to the point where you really cannot say anything but, "You did it wrong."


edit: I just noticed they said 'sprint commitment', and that they should adhere to it even if it runs overtime.

This is probably the most wrong part of the article - you get done what you can, and let it affect your velocity. It is this velocity that is the central indicator of how much you can realistically achieve
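As a rough sketch of how velocity works as that indicator — a rolling average of what was actually completed, used to forecast the remaining work. The numbers and the window size here are made up for illustration:

```python
def velocity(completed_points, window=3):
    """Rolling average of story points actually completed per sprint."""
    recent = completed_points[-window:]
    return sum(recent) / len(recent)

def sprints_remaining(backlog_points, completed_points):
    """Forecast how many more sprints the remaining backlog will take."""
    return backlog_points / velocity(completed_points)

# A team that "adheres to the commitment" by working overtime logs
# inflated numbers here, which is exactly what corrupts the forecast.
history = [18, 22, 20]                  # points completed, last three sprints
print(velocity(history))                # 20.0
print(sprints_remaining(120, history))  # 6.0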


I read "adhere to the sprint commitment" as being an undesirable aspect of Scrum that the author was complaining about, not something that Scrum is lacking.

It's certainly one of the bad parts of Scrum as commonly interpreted; c.f. http://www.peterkretzman.com/wp-content/uploads/2014/10/Aski....

(That's why newer docs tend to substitute "commitment" with "forecast", to clarify the Manifesto author's intent).


interesting - I recently heard someone call it a forecast and thought 'man why don't they use that term in scrum?'

Nice to know that they actually do (I haven't read up on it in some time, probably should)


I agree with you. "Scrum is the new waterfall" is true in this case in that both have been built as total strawmen.

Scrum says nothing about unit tests, nor does it require sticking to the commitment (although it used to)

> I dont think they had a very good explanatory service

The most recent documentation on "Core" scrum is very simple and easy to parse.


Yup, this guy sets up a total strawman which doesn't accurately describe how I've seen Scrum run at two major telecom software companies.

We change mid-sprint (when we need to), we don't have non-programmers running scrums or writing user stories and if you don't complete a story in a sprint then you roll it over to the next one.

If you have deadlines and contractual commitments to meet his suggestion of "use Kanban" strikes me as somewhat absurd. Running a team off of a Kanban board has its place (some of our support / platform teams use it) but I'm pretty sure your business owners want a better answer for when something is going to be delivered than 'when my Kanban board is empty'.


This isn't a straw man. This is exactly how Scrum gets implemented when management is running the process and not the development team.


+1


Want it delivered soon? Put it at the top of the Kanban board and set a rule that the stuff at the top of the defined list gets worked first.


Having worked on a team using Scrum, I agree with this: it's pretty much the least process you can have while still fairly accurately predicting when work will get done and when any given feature will show up in the product.

But it can be done well or badly: the team I worked on rigorously adhered to the process, and only cautiously diverged from it after careful consideration. Scrum done badly will rapidly become micromanagement instead.


I feel like every time I complain about scrum and my frustrations with it to people who believe in it, the response is always "well you just don't understand scrum," or, "you're just not doing scrum correctly." If the process is really as good as it is supposed to be and espoused to be by scrum evangelists (including many of the coaches my companies have hired to help us implement it), then I wonder why it's so hard to do correctly.

I have worked with great engineering teams that have hummed along in informal processes that were similar to Kanban (though not defined as such). As soon as we were forced to switch to scrum, our productivity absolutely tanked, and we couldn't get the things we needed done because we were either in planning meetings all the time or we had to spend longer deciding whether to change our priorities mid-sprint as things would come up or lessons were learned, because that meant a lot of time and energy spent communicating up the chain why things did not get finished or the priorities changed.


> If the process is really as good as it is supposed to be and espoused to be by scrum evangelists (including many of the coaches my companies have hired to help us implement it), then I wonder why it's so hard to do correctly.

I'm quite skeptical of "agile coaches" and similar.

But in general, I'd suggest that it's easy to break by either swapping out or eliminating one of the defined roles, having someone unsuitable for those roles doing them (e.g. a manager or program manager serving as "scrum master", which instantly turns scrum into micromanagement), having an excessive number of mid-sprint changes (if changing a sprint's work mid-sprint happens every other sprint, something is very wrong), or not having any acceptance from the broader organization for getting work done at a regular cadence without constant "emergency interrupts".

You should not be spending any significant fraction of your time in planning meetings; those occur once a sprint at most, and the sprint-planning portion should mostly consist of quickly double-checking priorities and grabbing the top stories by priority. The other significant effort lies in breaking down and "sizing" stories (which is the job of the development team, not the person providing requirements).
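"Grabbing the top stories by priority" can be sketched as a simple greedy fill against the team's capacity (e.g. its recent velocity). The story titles, point values, and priority scheme here are all invented for illustration:

```python
def plan_sprint(backlog, capacity):
    """Take the highest-priority stories that fit within capacity.
    `backlog` is a list of (priority, points, title) tuples,
    where a lower priority number means more urgent."""
    chosen, used = [], 0
    for priority, points, title in sorted(backlog):
        if used + points <= capacity:
            chosen.append(title)
            used += points
    return chosen

backlog = [
    (1, 5, "login flow"),
    (2, 8, "billing export"),
    (3, 3, "settings page"),
    (4, 8, "search rework"),
]
print(plan_sprint(backlog, capacity=16))
# ['login flow', 'billing export', 'settings page']
```

Capacity comes from measured velocity, not from negotiation — which is why working overtime to "make the commitment" defeats the whole exercise.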

I do agree that Kanban can work as well; it just doesn't (in my opinion) have good predictive power for when work will get done. (On the other hand, some of our teams ended up switching to Kanban because Scrum doesn't work at all with a distributed team.)


> It sounds like you dont really know what the purpose of scrum is.

Agreed. People lose sight of this being a "framework" far too often. Make it fit your needs and run with it.


I think we can just summarize the entire thing now and forever as "Management leading developer meetings always leads to tears".

