I've been using Python 3.4 (and now 3.5) for development and couldn't be any happier. The async features, specifically, are amazing.
When starting a new project from scratch, I see no reason NOT to use python 3. Maybe if a few libs you depend heavily on are still not ported.
Even in that case, though, it might be a good idea to start on Python 3. First because most libraries are already ported[1], and second, you could try porting that specific lib, or maybe running it as a separate service using python 2.
I guess the problem remains about maintaining big, Python 2-based programs.
I'm having the exact same experience! I just started using Python 3.4 (soon 3.5) for a new, major project and I'm pretty excited about it. I expected to find a bunch of roadblocks -- like being in a parallel universe or something. But, well, everything totally works!
The only package I found that didn't work with our Python 3 infrastructure was Fabric, but that's really easy to spin out (and will be fixed soon).
There's really no reason not to use Python 3 for (almost) all new projects at this point. It's exciting! I expect engineers who've gotten used to developing in Python 3 will start to feel embarrassed that they're still running 2 at work, and we'll see all of those internal codebases (like at Dropbox :P) get ported over.
How did you replace Fabric? I've hacked things together with some branches of Fabric + devs that just about work on Python 3, but it's not really satisfactory.
I replaced it with Ansible when I was doing a 2 to 3 transition about a year ago. It wasn't easy, but learning a real infrastructure automation tool (which Fabric is not) was worth it.
I thought Ansible wasn't ported to Python 3 [0] and actually had a lot of weird bugs when I last tried to use it. It is one of the few reasons I keep Python 2 installed on my laptop.
You rarely have straight dependencies on fabric from your app though. Personally I simply treat it as a command line tool, and `pip install` it system wide. It just happened to have been written in Python 2, and to require script files written in Python 2, but it's orthogonal to the apps/scripts I write with Python 3. Maybe your use case is different.
Moved a rather big production project (100,000 LOC) recently. It was not easy, but now I have this very warm, comforting feeling that if something happens in the future, I'll be ready for it :-) (though considering the adoption rate of P3, I'd say that future is, well, rather distant :-))
For me, unicode was the reason to switch to 3; not everybody lives in an English-only world :-)
There are efforts being made to port Pygame as you mention but there's also the option of not using Pygame. I've written small games and OpenGL demos using PySDL2 and PyOpenGL in Python 3. It works rather well.
IIRC there was a post claiming it was stable enough for production use. Another post I found just now[1] suggests I'm probably on the right lines.
PyQt is very much usable. It's GPL3 but if you're using it in commercial projects, you shouldn't just pinch your nose because you have to get a commercial license.
I'm not sure I'd pinch my nose over the cost of PyQt itself; that part is pretty negligible.
The problem is the cost of Qt itself for commercial development - unless I'm misreading the Qt website I might choke and drown on the cost of a developer license for Qt ("1x Qt for Application Development ($350/Interval)") and (" This subscription is automatically invoiced at 1 months Intervals").
Maybe it's badly-written English confusing me, but that (and other info on their site) sounds suspiciously like "Developing commercially for Qt and being able to do maintenance releases requires a $350/month developer license." Oh, and their licensing FAQ specifically says that you CANNOT start on the LGPL version and change to the commercial license without prior permission from them.
Maybe it's just me, but if I'm going to be expected to spend $4200/year/developer (or "contact us" for a perpetual agreement), maybe I'd be better off spending a little money first checking out the other alternatives.
> PyQt is very much usable. It's GPL3 but if you're using it in commercial projects, you shouldn't just pinch your nose because you have to get a commercial license.
But I do and thousands of other people do as well. Today, the value-add is somewhere else, not in language bindings for GUI toolkits.
No offense to whatever it is you work on, but if you are a commercial project and:
- Can't comply with GPL3 and make your code compatible with the license
- Can't afford to pay for a commercial license
- Can't use a different language, such as C++, which has alternate offers
Then the problem is on your side, because either your business model is wrong or you're simply being cheap. You're not entitled to a free GUI toolkit for no good reason.
For 350 GBP (= 500 EUR) you can have IntelliJ IDEA.
When you ask your boss for a budget of N x 500 (N = number of developers), it makes a huge difference whether you're asking it for language bindings for a GUI toolkit or for an IDE.
The language and GUI toolkits are both free, the glue between them is not, and that damages them both. How many commercial PyQt apps have you seen in the wild?
Exactly.
Imagine if someone asked money for XAML for C#... or for JavaFX. What would that do to their adoption in projects?
Yeah, activity picked up a week or two ago; someone has done most of the legwork to get the bindings auto-generated, and now there's the clean up to do.
That's very nice to hear. I was seriously beginning to fear that PySide was dead in the water.
I'd love to have a decent GUI toolkit for Python I could use with Apache or MIT code (without having to isolate the GPL bits), but I was nervous about being stuck on Qt4, so I was never terribly sure about PySide.
I can think of one reason for using Python 2 -- if you are creating a project that you want to share with others, there are a lot of enterprise systems that are still on Python 2.x. I've had the experience a number of times of finding a program that seems to do exactly what I want, but I can't deploy it on my production servers because they are all running RHEL 6 with Python 2.6.
Of course, it is possible to install multiple versions side by side and edit the scripts so they all start with #!/usr/local/bin/python3 or something like that, but it is just another barrier to entry. Not to mention that there are most likely a bunch of additional modules that have to be tracked down to get anything to work.
(Actually, come to think of it, this really isn't a python specific issue, this is really a general plea that if you are making something that is geared toward sysadmins, make sure that there is an easy enough installation method available for at least the currently supported versions of RHEL/Centos, Ubuntu LTS, and Suse Enterprise).
The implication that you cannot use Python 3 "if you are creating a project that you want to share with others" is false. Python 3 code is routinely shared and used.
There are a lot of enterprise systems that are still on COBOL, but that wouldn't make it any less ridiculous to say that you can't use Python if you are creating a project that you want to share with others.
In addition to what others said, if you write code for Python 3, making it compatible with Python 2.7 is a relatively painless task (in my experience), because Python 2.7 has a lot of py3 functionality backported.
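For what it's worth, most of that backported behaviour is opt-in via __future__ imports, so a module written like this runs unchanged on 2.7 and 3.x (a minimal sketch):

from __future__ import absolute_import, division, print_function, unicode_literals

print("half:", 1 / 2)    # 0.5 on both 2.7 and 3.x, thanks to `division`
print("floor:", 1 // 2)  # 0 on both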
Regarding Python 2.6: it was discontinued back in 2013, and as far as I know all Red Hat versions that used 2.6 are no longer supported unless you're paying for extended support. So I'm not convinced it's a good excuse.
Also, if you use Java, for example, and you want JDK 7 or JDK 8, it doesn't matter that the RH distro ships an old version; you'll find a proper RPM (or build one) to use it.
RHEL 6 uses Python 2.6 and is supported till 2020 (not counting extended support). Of course, if an app only needs a single host or two, it's no problem to have an updated OS for that app. But certain app types, such as backup and monitoring apps, don't do any good unless they run on the majority of systems that a site has. In my case, I couldn't even try out several popular backup systems, as they either required Python 3 or had other library requirements that weren't available on the older enterprise systems.
Same here. It really wasn't that long ago that I was sticking with Python 2 and recommending to my managers that new projects be started with Python 2. (Yes, I know, I was part of the problem)
These days, I have few reservations about starting new projects with Python 3. It has become my new default.
I still use Python 2 because I'm not the best programmer and most Stack Overflow answers are Python 2 based (although more and more cover both nowadays). I wouldn't be opposed to using Python 3, but I see no real reason to do so.
People make it look like those are two completely different languages, even though the differences are actually very minor. You'll realize it once you start using Python 3. Also, py3 is much easier for a novice to master because it is much more consistent and has fewer special cases.
For example, when dividing numbers you no longer need to convert one operand to float in order to get a decimal result. There are no separate int and long types, just int, etc.
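A quick illustration, in case anyone hasn't seen it:

# Python 2: 3 / 2 == 1   (integer division unless one operand is a float)
# Python 3: 3 / 2 == 1.5 (true division by default)
print(3 / 2)   # 1.5 in Python 3
print(3 // 2)  # 1 -- explicit floor division, same in both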
That's valid, but I don't think there are many questions that don't apply to both languages. I'd bet most of the time you will just have to change range() and print(). The difference between the two versions isn't that extreme!
Two quick notes, probably well-known in the Python community, maybe less so outside:
1. The biggest hurdle to overcome when moving existing code bases to Python 3 (affectionately known as py3k) is third-party libraries. This is the main reason why there is some pressure on library authors to move their libraries to Python 3. Cf. https://python3wos.appspot.com/ for some (very encouraging, IMHO) results.
2. It's possible, in many cases and without too much effort, to write (or refactor) code that is compatible with both Python 2 and Python 3. One great tool for this is the python-future project.
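A rough sketch of what that looks like in practice, assuming the future package is installed (pip install future); the exact imports depend on what your code actually touches:

from __future__ import absolute_import, division, print_function, unicode_literals
from builtins import dict, input, range, str  # python-future backports on 2.x, stdlib builtins on 3.x

for i in range(3):        # Py3-style lazy range on both versions
    print(str(i))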
Let's just keep in mind that the entire reason languages with immutable data are trending is because we already have all those languages with mutable data to use where they are fit.
A new language today shouldn't really implement Python metaprogramming, because Python already does that.
Yes, and I think the reason why Nim got so much attention was because it filled the niche of metaprogramming in a statically-typed, AOT language, which is probably the last frontier of mutability and metaprogramming. It's probably the last major new language with metaprogramming front and center we'll see for a very long time.
Honestly, I don't think I'll ever create my own general-purpose language precisely _because_ I already know languages that are perfectly tailored to my use. If I start thinking of "what would my dream language look like", I start mentally re-creating Python. And sometimes, I might be in a different mood, and I'll think "wait, I'll make a statically-typed, AOT version of Python that I can use as a systems programming language instead of C/C++", and I end up mentally re-creating Nim, even to the point of going "well, I'll borrow the static typing part from Modula-3 because Modula-3 is cool and Python has a lot of Modula-3 in it already...", which is basically what Nim did.
Indeed. That may mean fewer opportunities for "meta-programming" if by meta-programming we mean mutation of core structures. It could also be a good chance to turn around the default of declaring variables to declaring constants instead. Complex data structures would still have to be mutable by default I guess, but they would be the obvious next target to becoming immutable by default.
If you make my huge NumPy arrays immutable, I will be very unhappy (unless I am significantly misunderstanding how this everything-is-immutable strategy works).
There might be a lot to be said for rust-style immutable-by-default.
First, you'd still have optional mutability. Second, arrays are rather simple data structures, but ones where you often care about performance: small tight loops are common. Compare that to the data structure that is, e.g., holding your options and flags for the computation: you don't do much computation on that, but you have lots of invariants that you want to uphold, i.e. lots of code, but each bit executed roughly once, no tight loops.
And there would always be the optional mutability. Your complex data structures are seldom the ones driving performance.
Ie often you have to get the complex data structures right (lots of logic, not much computation) to set up your computations on the simple ones (eg lots of number crunching on your arrays).
Yes, it's easier to reason about correctness in immutability land and add mutability in after the fact as a compiler optimization than to remove mutability for safety as a compiler pass.
Honestly, I found that kind of disappointing, because one of the things I like most about Python is its metaprogramming capabilities.
I'm honestly not fond of languages like Rust that try to restrict mutability. The only one of the new breed of languages I've found that I really like is Nim.
Doesn't Nim go a bit too far in that direction, though? Mutable strings seems a bit dangerous to me – won't you have to copy them all the time to make sure they don't get modified somewhere else?
Practically speaking, Python is still slower than a lot of languages that use predominantly immutable data structures. I think you'd want to go to a systems programming language for raw speed anyway.
"Immutable" data structures is rather confusing name. Here it is a synonym for persistent data structure [1], which means that you operate on data structures by creating new "versions" of them, which share a lot of memory with the old versions. Parts of the program that refer to the old version see it as immutable.
Modern radix trees[1] can be as fast as the hash maps you'd find in a language's standard library while providing additional capabilities (ie fast ordered traversal). Making them immutable and persistent[2] is easy and only adds a bit of performance overhead.
Obviously programs that rely on communicating by mutating parts of a dictionary would have to be rewritten to use an immutable, persistent map—but they could still maintain similar performance.
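In Python this style is available today through third-party libraries; here's a tiny sketch using pyrsistent (assuming it's installed), where an "update" returns a new map that shares structure with the old one:

from pyrsistent import pmap

config = pmap({"debug": False, "retries": 3})
config2 = config.set("debug", True)        # new map; `config` is untouched

print(config["debug"], config2["debug"])   # False True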
On the other hand, stating "they could still maintain similar performance" for immutable dictionaries is misleading at best (you are right for dictionary sizes approaching 0). Anyone can look up benchmarks on the Internet showing real-world performance of immutable data structures and make their mind up if they actually can deliver in their particular use cases.
Mutability can always be used to boost performance on a von Neumann architecture if you know what you are doing. It's just becoming impractical due to increased complexity, pervasive security paranoia, and the fact that there are very few people who can get it right. If you want to advance your case, please create a competing computer architecture as fast as von Neumann's, physically possible, but based on immutability.
Well, opt-in immutable dict would be awesome. I've found that many times my usage of set and list is more appropriate as their immutable counterparts (frozenset, tuple). Lots of times I create sets or lists with the intention that they be immutable and it would be clearer if I could specify to future me/maintainers/extenders that this should remain as initialized.
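The built-in immutable counterparts already let you signal that intent, even if there's no frozendict in the standard library; for example:

ALLOWED_EXTENSIONS = frozenset({".png", ".jpg", ".gif"})  # can't be mutated later
ORIGIN = (0, 0)                                           # tuple instead of list

# ALLOWED_EXTENSIONS.add(".bmp")  -> AttributeError, so the intent is enforced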
Yes, it is slow. Immutability is allowing you to write elegant concurrent algorithms at the expense of speed and memory. Imagine you are forced to never change an array of numbers for an algorithm computing matrices, rather creating new matrices for each step and specifying numbers only during initialization. But hey, you can cache them and keep track of every single copy without any confusion now! Now imagine a non-concurrent language like JavaScript - what exactly are you going to get by using immutable structures except for the feeling you are going with the times?
Reference counting and caching are probably the only areas where immutability can help for most programs.
Immutability is in fashion nowadays due to functional programming being in fashion despite the inability to ever achieve "pure" functional programming in reality - I/O is by default "dirty" (I really hate these charged words in computing), no amount of flatmapping (i.e. monads) is going to help with that.
Everything immutable (every data structure) is a really convenient way of thinking about processing data in the abstract sense. You can think about processing your (semi-)structured data (records) in terms of high-level pipelines and processing steps. And for a whole class of problems this works well.
Certainly this abstraction can stretch pretty well, especially when your language/framework can optimize immutable data structures under the covers (mutating and reusing them). But the abstraction is just that: an abstraction on top of a mutable platform.
The platforms of today (VMs, operating systems, hardware, storage) are not immutable and notoriously full of side effects and undefined behavior. So these immutable language / frameworks end up being good tools at one extreme (high level data processing), bad at the other (system level languages) and vary on a case by case basis.
On today's platforms, if you want to allocate an object in your language, think of the layers and layers of side effects that will happen. 1. The allocator/GC system allocates it from some kind of pool, possibly with lots of side effects there (operating on a pool, triggering some kind of collection/cleanup, requesting a large mapping from the OS). 2. Then at the OS level, accessing mapped memory has all sorts of side effects (paging, lazy initialization, etc.). 3. At the hardware level there are lots of side effects: instruction cache, multiple layers of data caches, TLB, DMA.
In conclusion, for some kinds of problems you don't need to worry about these details and for others you can't afford to ignore them.
Thanks, that's a great answer and helps me understand.
I guess there are subsets of problems where immutability is a good idea (anything needing state, 'Undo' capabilities), neutral, and a bad idea (number crunching, data processing, 'traditional'(??) programming).
I have been wowed by the beauty of the immutable approach in web apps, but definitely subscribe to the right technique for a task. I can't imagine doing the data processing in physics using an immutable language...
Indeed it boils down to what you need to achieve. You might want to use both approaches and combine them for best performance/readability.
There are specialized high-performance structures that are mutable (e.g. disruptor in HFT trading) optimizing all the way down to CPU cache lines to achieve unbelievable speeds, yet they work nicely if the objects they refer to are immutable.
So just take all advice to switch to immutability with a grain of salt and learn to figure out which approach is suitable for which part of your system, so that it both performs well and stays easy to conceptualize and maintain ;-)
> Reference counting and caching are probably the only areas where immutability can help for most programs.
There is another use spurred by React: when you have a new state to represent, you have to diff it against current state to know exactly what changed and work only on this diff. "Immutable" structures make this trivial.
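The trick is that with immutable, structurally shared state, "nothing changed here" becomes a constant-time identity check instead of a deep comparison. A hedged Python sketch of the idea (React itself does this in JavaScript, of course):

old_state = {"header": ("Home",), "items": (1, 2, 3)}
# "updating" builds a new outer dict that reuses the unchanged parts
new_state = dict(old_state, items=(1, 2, 3, 4))

for key in new_state:
    if new_state[key] is old_state.get(key):
        print(key, "unchanged, skip")      # header passes the identity check
    else:
        print(key, "changed, re-render")   # only items needs work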
His commentary starts at 12:38 in the video; it's really very thoughtful.
Here are some choice quotes:
...and why should you switch?
and ultimately, I'm not saying that you should switch.
I would like you to switch... but I also recognize that it's difficult to
switch, and it feels like a lot of hard work that you could also spend
instead on say, improving the design of your website, or adding features
to your application.
but...
So... python 3 is just a better language and it is getting better
over time.
Python 2 on the other hand, is a fine language, and it will remain
exactly what it is.
Yeah, we'll fix the occasional bug, but it's asymptotically approaching
2.7 perfect, and it's never going anywhere beyond there.
So, that's why you should switch to python 3, because the only way
to benefit from the good work the core contributors do, is by switching.
Seems reasonable to me.
Don't yell at people for using Python 2; they have jobs and real-life commitments to do things like running a company that pays people.
...but if you can use python 3, you should because it's only ever going to get better, while python 2 won't.
(and my apologies if I've transcribed that incorrectly, I did my best to capture exactly what he said)
This is the way we should think about it, imho. If you don't like Python 3, or have any other reason to stay at Python 2, do so! But don't be mad at people having a good time over at Python 3.
I try to make my projects work with both, although that is getting increasingly hard.
Why did you edit in the implication that Python 3 is not applicable for people who have jobs and real life commitments to do things like running a company that pays people? That is your opinion, not Guido's. Python 3 is already being used in companies and it's plenty good enough for that. Including library support. Dropbox using 2.7 is no different from Dropbox avoiding PyPy, they are evidently ultra-conservative (and probably have a lot of old code they don't want to port).
That is my opinion, not Guido's. That's why it's not in quotes.
I'm not saying python 3 isn't suitable for production.
I'm saying (and this is my opinion): If you have to use python 2, then do. If you can use python 3, then do.
THIS is Guido's opinion:
I would like you to switch... but I also recognize that it's difficult to
switch, and it feels like a lot of hard work that you could also spend
instead on say, improving the design of your website, or adding features
to your application.
...at least, I have to assume so, because that's what he actually said at the keynote.
So, take what you want from that, but don't accuse me of 'editing in' some implication.
If you're happy using python 3; then be happy.
...but, if you shout at someone for using python 2, I think you're a douche, because there are real life reasons some people keep using python 2.
...and I, and clearly Guido, acknowledge that.
You don't have to. You can have your own opinion all you like... I honestly couldn't care less.
A rewrite (even a small one like moving from Python 2 to 3) costs dev and test time, and makes it difficult to do any other work simultaneously. It's not going to happen unless it has some value (perhaps, for example, they wanted to use the new builtin async syntax of Python 3.5 instead of rolling their own).
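For reference, the 3.5 syntax in question looks roughly like this (a trivial sketch, obviously not anyone's production code):

import asyncio

async def fetch(delay):
    await asyncio.sleep(delay)   # native coroutine: no callbacks, no yield from
    return "done after %s s" % delay

loop = asyncio.get_event_loop()
print(loop.run_until_complete(fetch(0.1)))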
I think this is good insofar as he gets to see the pain first hand that a lot of companies with large 2.x codebases are having with the idea of upgrading to 3.x.
> There are also bugs that are feature proposals that do have patches attached, but there is a general hesitation to accept changes like that because there is concern that they aren't useful, won't mesh with other similar language features, or that they will cause backward incompatibilities. It is hard to take patches without breaking things all the time.
I agree with Guido that the thing I hate the most in Python is packaging in general. I find Ruby's gems, bundler and Gemfile.lock to be a much more elegant solution.
On the other hand, I really like the explicit imports (when used properly). Less magic that makes code navigation way easier.
As a distro packager, I find Python's packages to be much better and easier to integrate than Ruby gems. I've had no shortage of troubles with Ruby gems: requiring on the git binary to build the gem (even in a release tarball), test suites not being included in the gem releases on rubygems.org, rampant circular dependencies, etc. Python's PyPI has caused me no such issues.
"Dropbox has a lot of third-party dependencies, some of which cannot even be rebuilt from the sources that it has. That is generally true of any company that has millions of lines of Python in production; he also saw it at Google. That also makes switching hard."
Why can't these dependencies be built from the source they have, presumably the same version as the currently built binary? Is it because other components of the build chain have changed, or what?
As a civilian, I found this situation mildly disturbing.
At least in my experience, it's very common for vendors to provide you with the source (through a licensed agreement) but deliberately limit your ability to build it. Sometimes this is through licensing, sometimes it's by excluding some key portions of the build chain (e.g. internal libraries that don't directly relate to the functionality of the software).
At any rate, at my current employer, we have lots of things where we have the source as well as the libraries and headers, but don't have the capability to build those libraries from source.
It sounds silly, but it's still vastly better than having no source access whatsoever. At least for us, we often need to understand an implementation detail of a proprietary library, but being able to actually build things is secondary.
There are both syntactic as well as behavioural changes.
A simple example is the print statement, in Python 2 you could do:
print "Hello World!"
Whereas in Python 3 you have to do:
print("Hello World!")
Now obviously, you can easily change this programmatically. Things get harder when you use more advanced print statements, for example adding a "," at the end to prevent a newline, printing to different targets, new ways of using placeholders, et cetera.
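For reference, those advanced cases map onto keyword arguments of the print function:

import sys

print("no newline", end="")       # Python 2: print "no newline",
print("oops", file=sys.stderr)    # Python 2: print >>sys.stderr, "oops"
print("a", "b", "c", sep="-")     # a-b-c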
Then there is the fact that plain byte strings are no more in Python 3; strings are Unicode. Again, for average strings that's not really a problem, but it matters when you start using bytes, special characters, et cetera.
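A small example of where that bites:

s = "héllo"                   # str: Unicode text in Python 3
b = s.encode("utf-8")         # bytes: what actually goes over sockets and files

print(len(s), len(b))         # 5 6 -- characters vs. encoded bytes
# print(s + b)                # TypeError in Python 3: can't mix str and bytes
print(s + b.decode("utf-8"))  # an explicit decode is required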
There is a tool (2to3) that translates programs from Python 2 to Python 3, but it does not catch everything, and you still have to fix stuff manually.
Most of the common 2.x programs don't depend on the change in Unicode semantics or whatever; they really don't work simply because of the unnecessary changes in the way print and formatting must be written in 3.x. It was technically possible to make many more programs still work while still introducing the more convenient syntax, and in my opinion it was bad judgement not to do so.
In my view, those who made the decision took "there's only one way to do it" too seriously. The Zen of Python actually says: "There should be one-- and preferably only one --obvious way to do it." Even if the obvious way for 3.x can be different from 2.x, I can't imagine any real-life harm from allowing the alternative syntax for formatting and printing except for a few special-case branches somewhere in the source. Yes, people would then continue to write the "wrong syntax" in new programs too; so what?
In Python3 print is now a function, and no longer a keyword. The syntax got strictly simpler that way, and semantics, too.
You could try to make the interpreter guess when you want to use print as a function and when as a keyword. But that would be horribly complicated, and is bound to go wrong.
No it wouldn't. There would be some corner cases, but mostly it would just work for existing code, whereas from the perspective of the 2.x users, now it just doesn't. I know; I write compilers for a living. The maintenance cost, from the point of view of the compiler maintainer, would be almost invisible (there are much less trivial things to worry about), while the benefit for the current users would be significant. The reason it wasn't done is much more "political" ("just one way to do it") than technical.
Specifically: once it's declared that print is a function, you don't need to treat the string print as a keyword. Then, comparing `print expression` and `print(expression)`, you notice that if you know print is a function, the parentheses aren't giving you any new information. So the difference is: do you want to encode the knowledge "print is a function" in the compiler or not? That encoding is trivial, and even if it can be called "a special case", it isn't anything that anybody would spend significant energy maintaining. It obviously appears "less elegant" to describe your compiler as having "special knowledge that print is a function", but there are even ways out of that: you can generalize such constructs (function calls without using the return value). But then "there would be more than one way to do it."
But orders of magnitude more 2.x Python programs would "just work" when started under such a 3.x. Of course, once you accept that the transition should be less painful, you'd need to provide a way for libraries to also have both the "newer" and the "older" ways of doing it. "More than one way" is potentially contagious. But sometimes "worse is better."
OK, as a litmus test, how would your proposed compromise Python variant deal with the following program:
print
As Python 3, this program does nothing. As Python 2 it prints a newline.
I do agree that Python could have adopted the ML/Haskell syntax for calling functions that does away with most parens. But I don't think anyone in Python land would have swallowed that.
It's clear that in Python 3 it would have to behave as in Python 2, if our goal were to have most existing Python 2 programs "still working" when people give them to Python 3.
Hardly any changes at all, and the things that did change didn't actually make a clear difference in productivity or the clarity of your code. The ability to make breaking changes wasn't really used to improve the language, maybe except for unicode handling – no new syntax (would've loved functions-as-blocks myself), no getting rid of old crufty syntax, nada.
Python 2.7 supports string.format and Python 3 supports % formatting.
The only actual difference in that example is that print became a function in Python 3, requiring some additional parentheses. (Something that can be mechanically translated without much hassle.)
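Right, both formatting styles work in both versions, so formatting by itself isn't a porting issue:

# valid in Python 2.7 and Python 3
print("{} of {}".format(1, 3))
print("%d of %d" % (1, 3))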
For me it's not really that there are things keeping me away, it's more that py2.7 still seems fine for everything I want to do. I know all the libraries I'm used to are supported and I can make stuff work. There are some small hurdles to moving and I have no pressing need to jump over them.
The non-existing support of some crucial libraries that you are depending on. That, in my opinion, is the number one reason holding people back from upgrading.
Fortunately that is getting better and better, it is just a very slow process.
Old Python handled strings in a really dumb way that only seemed to work for monolingual Americans and (to a lesser extent) Western Europeans.
New Python fixed that.
Problems:
1) new Python wasn't very tolerant of real-world conditions where incorrect text encoding happens once in a while.
1b) there was great resistance to fixing that because "Python now did it correctly!" and the rest of the world was just assumed to do it correctly, always.
2) not everybody who already programmed in Python really understood the need for the new (and mostly correct) way of handling strings. Also not everybody in the US really understood Unicode and encodings.
3) there was no proper update path from Old Python to New Python.
4) it was not possible (or very difficult) to write code that was both valid Old Python and valid New Python and which did the right thing in both cases.
5) at the same time the interface for libraries written in C was changed.
5a) the new way was better.
5b) a change was needed for New Pythons string handling anyway.
5c) it could still have been done in a backwards-compatible way...
5d) ... but it wasn't, since the new way was Better.
6) lots of important Python libraries are partially written in C because pure Python is so slow
7) ... so porting all the important libraries was necessary for New Python to take off while at the same time being rather annoying and difficult work.
There were other changes at the same time that were improvements but which made upgrading hard. Print was no longer a keyword with lots of special handling and corner cases but an ordinary function. You could switch newer versions of Python 2.x to the same behaviour but not the older ones. Many libraries needed to work with many different Python 2.x versions, which made it hard to support Python 3.x at the same time. You could convert all your print statements to using a function that worked like the new print function and then either use that natively (3.x and newer 2.x) or provide a compatibility function that wrapped the print statement (older 2.x). You would put this code in a module you would selectively import depending on the Python version -- or some variant of that. Not actually hard but really annoying and something that required changing many lines of some libraries.
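A rough sketch of that compatibility-module idea, for 2.6+/3.x only; genuinely old 2.x versions would need the hand-written wrapper around the print statement described above (the module and function names here are made up):

# printcompat.py
from __future__ import print_function  # no-op on 3.x, enables the function on 2.6/2.7

print_ = print  # the rest of the code base does `from printcompat import print_`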
Ad 1, things have improved a lot and there are now known ways of fixing the remaining problems (with newer 3.x versions).
Ad 2, Unicode is better understood these days.
Ad 3 and 4, things have again improved. If you skip the first few 3.x versions and the early 2.x versions then it is not too hard to write code that works on both Old and New Python.
Ad 5, I don't know if it is possible to support both ABIs in a single shared library, but at least the core code that actually does the work can be shared.
PyPy might have helped as well by making ordinary Python -- the kind you would otherwise write in C -- run fast.
> Python 3 is a "much better language" than Python 2, for one thing. It is much easier to teach.
[citation needed]
I find it much easier to teach a language where range(5) _is_ the list [0,1,2,3,4] rather than a lazy range object. Same thing for .keys() and .items(). For people learning their first language, iterators and lazy views are not simple.
That said, right now I am teaching Python 3 rather than Python 2 (mostly because of unicode and division). Though I have mixed feelings about the Py2/Py3 transition (it's not all web dev, where all serious projects are getting ported; some projects are written only once, so the move to Py3 effectively means letting them die).
I find it much easier to teach a language where range(5) _is_ the list [0,1,2,3,4]
I thought so too at first, but I think that was because I dislike change. When I really began to use Python 3 in earnest I came to like the idea, because I almost never need the value of range as an actual list. And teaching loops without intermediate index variables is much easier.
For example I would never teach this in Python:
for i in range(10):
    print(mylist[i])
I would always teach this:
for i in mylist:
    print(i)
Which is clearer and easier to read? Did I have to teach you the Iterator protocol in order to understand that? No. It almost never comes up until you're an advanced Python user.
In the rare case where I do need to take the value of range it becomes much more explicit:
nums = list(range(10))
If you're an old-time Python 2 user you might ask why you have to pass range to the list constructor when it returns a list itself. But if you're a new Python programmer learning Python 3 you don't have those expectations: range is something you loop over and list is happy to do that for you. You could also write it out long-hand:
nums = [i for i in range(10)]
Which could be nicer if you wanted to transform the numbers or add some conditional filters.
Over all I find Python 3 to be much more consistent in its design than Python 2 and thus easier to teach.
- object casting is not nice (for code brevity/clarity, things which Python aims to be good at),
- if objects are similar, it's a pain to explain the difference (this is not theoretical; it's a thing I had to deal with many times).
In particular, if one is an advanced programmer in general, or at least has a firm grasp of Python basics, iterators are nice. But for newbies they are harder. (And it's even harder to hand-wave that they are "something which can be cast into a list".)
As a side note, I dislike the possibility of using tuples as iterables. It breaks "There should be one-- and preferably only one --obvious way to do it.", i.e. using tuples in a place where one (conceptually) should use a list.
And again, explaining the difference to newbies is painful ("a tuple is a handicapped list").
I don't begin teaching someone programming by explaining objects and types.
I generally start by introducing three fundamental concepts: variables, conditionals, and loops. And I keep it simple to begin with:
a = 1
print(a) # 1
a = 2
print(a) # can you guess what it will print?
Then I add conditionals:
if a == 2:
    print("it is two!")
else:
    print("it is not two...")
And I only really cover 'for' at first:
groceries = ["ham", "cheese", "eggs"]
for item in groceries:
    print(item)
Along the way I explain some of the primitive data types: string, integer, float; and containers such as list and dictionary. And that is usually enough to get started with simple tasks. I tend to elide what functions are and just call them "commands" until later on so that I can demonstrate why looping is so cool:
import turtle
turtle.setup()
for amount in range(100):
    turtle.forward(amount)
    turtle.left(75)
And that usually drives home the point: grouping commands together to repeat them as many times as we wish using loops; variables hold data; and conditionals let us do different things depending on whether something is True or False.
I haven't had much trouble with this approach for years. I don't even get to explain iterables to newbies most of the time! Once in a while someone tries something like:
a = "foo"
a + 1
And they get TypeError or they pass in an object of the wrong type to a function and get ValueError. Early on this usually isn't a problem because some things just don't make sense like adding numbers and strings. However it can get confusing when learning how to look up functions and use them because we can only informally document what kinds of things a given function will take in its signature... it's an advantage and disadvantage of the duck-typing philosophy. It's a wart but one that I haven't really encountered with anyone I've taught until they're pretty far along and able to help themselves.
Outside of the repr() it's not that easy to tell that range(5) isn't a list, is it? It's an indexable list-like thing that just happens to not be exactly a list.
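Right, for most practical purposes it quacks like a list:

r = range(5)
print(r[2])     # 2 -- indexing works
print(len(r))   # 5
print(3 in r)   # True
print(list(r))  # [0, 1, 2, 3, 4], for when you really do need a list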
Feels like people will be urged to move to Python 3 due to the new features... And yet I'm thinking that Python 4 will happen before many have been able to move to Python 3. :-P
Because Python 4 will be only "whatever comes after the last 3.x", with no backwards-incompatible changes, it won't matter: porting to "python 4" will require the same effort as porting to 3. So start now ;)
[1] - https://python3wos.appspot.com/