Linus Torvalds Answers Your Questions
Monday you had a chance to ask Linus Torvalds any question you wanted. We sent him a dozen of the highest-rated questions, and below you’ll see what he has to say about computers, programming, books, and copyrights. He also talks about what he would have done differently with Linux if he had to do it all over again. Hint: it rhymes with nothing.
The Absolute Death of Software Copyright?
by eldavojohn
Recently you spoke out about software patents and the patent process. But I was interested in what you said about how “nasty” copyright issues could get. You use SCO as the obvious nightmare case, but what about violations of open source licenses like the GPLv3? Would you care if someone forked the Linux kernel, made major modifications to it, and started selling it without releasing the code to the customers? What does your ideal situation look like for open source and commercial closed source? Would you just copy the Finnish model? And aren’t you afraid American experts are just as daft as American juries?
Linus: So I like copyrights, and even on patents I’m not necessarily in the “Patents are completely evil” camp. When I rant about patents or copyrights, I rant against the *excesses* and the bad policies, not about them existing in the first place.
The patent problems people on slashdot are probably familiar with: the system is pretty much geared towards people abusing it, with absolutely ridiculous patents being granted, and it hindering invention rather than helping it. The failures are many, and I don’t know how to fix it, but much stricter limits on what can be patented are clearly needed.
People were apparently surprised by me saying that copyrights had problems too. I don’t understand why people were that surprised, but I understand even *less* why people then thought that “copyrights have problems” would imply “copyright protection should be abolished”. The second doesn’t follow at all.
Quite frankly, there are a lot of f*cking morons on the internet.
Anyway, the problems with copyright come from absurdly long protection periods, and some overly crazy enforcement. And don’t get me wrong: I don’t actually think that these problems show up all that much in the software industry. The case of SCO was not, I think, so much a failure of copyright law itself: sure, it was annoying, but at the same time it was really more about a psychopathic company with a failed business that tried to game the system. Tried, and lost. And yes, that fiasco took much too long, and was much too expensive, and should have been shut down immediately, but that whole “using the law for harassment” in the US is a separate issue independent of the copyright issues.
No, when I stated that copyright protection is too strong, I was talking about things like “life of author+70 years” and the corporate 95-year version. That’s *ridiculous*. Couple that with the difficulty of judging fair use etc, and it really hinders things like archival of material, making of documentaries, yadda yadda…
So I personally think that IP protection isn’t evil in itself – but that it turns evil when it is taken too far. And both patent protection and copyright protection have been taken much, much too far.
Scale the term limits back to fifteen years or so, and copyrights would be much better.
When I’m designing a processor for Linux.
by Art Popp
I spend some time designing things in Verilog and trying to read other people’s source code at opencores.org, and I recall you did some work at Transmeta. For some time I’ve had a list of instructions that could be added to processors to drastically speed up common functions, and SSE 4.2 includes some of my favorites, the dqword string-comparison instructions. So… what are your ideas for instructions that you’ve always thought should be handled by the processor but have never seen implemented?
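(For readers who haven’t run into them: the dqword string-comparison instructions the questioner mentions are exposed to C through SSE 4.2 intrinsics. A minimal sketch, assuming GCC or Clang with -msse4.2; the strings are illustrative:)

    #include <nmmintrin.h>   /* SSE 4.2 intrinsics */
    #include <stdio.h>

    int main(void)
    {
        /* A 16-byte set and a 16-byte chunk; the literals are padded to
           a full dqword so the unaligned loads stay within bounds. */
        __m128i set   = _mm_loadu_si128((const __m128i *)"aeiou\0\0\0\0\0\0\0\0\0\0\0");
        __m128i chunk = _mm_loadu_si128((const __m128i *)"bcdfghjklmnopqrs");

        /* One instruction scans the whole chunk for any byte in the
           set; it returns 16 if nothing matches. */
        int idx = _mm_cmpistri(set, chunk, _SIDD_UBYTE_OPS | _SIDD_CMP_EQUAL_ANY);
        printf("first vowel at index %d\n", idx);
        return 0;
    }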
Linus: I actually am not a huge fan of shiny new features. In processor design – as in so much of technology – what matters more is interoperability and compatibility. I realize that this makes people sad, because people are always chasing that cool new feature, but hey, in the end, technology is about doing useful things. And building and extending on top of existing knowledge and infrastructure is how 99% of all improvement gets done.
The occasional big shift and really new thing might get all the attention, but it seldom really is what matters. I like to quote Thomas Edison: “Genius is 1% inspiration, 99% perspiration”. And that very much covers CPU architecture too: the inspiration is simply not as important as executing well. Sure, you need some inspiration, but you really don’t need all that *much* of it.
So in CPU design, what should really be looked at is how well the CPU is able to do what we expect. The instruction set is important – but it is important mainly as a “I can run the same instructions the previous CPU did, so I can run all your programs without you having to do any extra work” issue – not as a “what new cool feature would you want in an instruction set”.
To a CPU architect, I’d tell them to do the best they damn well can in the memory subsystem, for example. Regardless of instruction set, you’ll want a great memory subsystem end-to-end. And I don’t just mean good caches, but good *everything*. It’s a hell of a lot of detail (perspiration), and I guarantee you that it will take a large team of people many generations to do really well on it. There is no simple silver bullet with a cool new instruction that will solve it for you.
And don’t get me wrong – it’s not *all* about the memory subsystem. It’s about all the other details too.
Now, when it comes to actual instructions, I do tend to think that the world has shifted away from RISC. I’m a big believer in being good at running existing binaries across many different micro-architectures – the whole “compatibility” thing. And as a result, I think fragile architectures that depend on static instruction scheduling or run in-order are simply insane. If your CPU requires instruction scheduling for one particular set of instruction latencies or decoder limitations, your CPU is bad. I detested Itanium for this reason – exposing the microarchitecture in the instruction set is just insane.
No, I want out-of-order and “high-level” instructions that actually work across different implementations of the same ISA, and across different classes of hardware (iow, span the whole “low-power embedded” to “high-end server” CPU range). So for example, I think having a “memcpy” or “memset” instruction is a great idea, if it allows you to have something that works optimally for different memory subsystems and microarchitectures.
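A concrete illustration of that point: x86 already has something close to a “memcpy instruction” in rep movsb, which modern implementations optimize internally for their own cache and memory subsystem. A minimal sketch, using GCC/Clang inline assembly (illustrative, not kernel code):

    #include <stddef.h>

    /* Copy n bytes with `rep movsb`: software states the intent ("copy
       these bytes") and each microarchitecture picks its own strategy,
       so the same binary stays efficient across CPU generations. */
    static void copy_bytes(void *dst, const void *src, size_t n)
    {
        asm volatile("rep movsb"
                     : "+D"(dst), "+S"(src), "+c"(n)
                     :
                     : "memory");
    }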
An example of what *not* to do is exposing direct cacheline access with some idiotic “DCBZ” instruction that clears cachelines – because that will then make the software have to care about the size of the cacheline etc. Same goes for things like “nontemporal accesses” that bypass the L1 cache – how do you know when to use those in software when different CPU’s have different cache subsystems? Software just shouldn’t care. Software wants to clear memory, not aligned cachelines, and software does *not* want to have to worry about how to do that most efficiently on some particular new machine with a particular cache size and memory subsystem.
What would you have done differently?
by Rob Kaper
It’s been over twenty years since the inception of Linux. With 20/20 hindsight, what would you have done differently if you had had today’s knowledge and experience back in the early days?
Linus: I get asked this quite often, and I really don’t see how I could possibly have done anything better. And I’m not claiming some kind of great forethought – it’s just that with 20:20 hindsight, I really did choose the right big things. I still love the GPLv2, and absolutely think that making Linux open source was the greatest thing ever.
Have I made mistakes? Sure. But on the whole, I think Linux has done incredibly well, and I’ve made the right decisions around it (and the big things have *occasionally* been about technical issues, but more often about non-technical things like “Don’t work for a commercial Linux company even if it seems like such a natural thing to do – keep working in a neutral place so that people can continue to work with me”).
Monolithic vs. Micro-kernel architecture
by NoNeeeed
Has there ever been a time in the development of the Linux kernel when you’ve wished you’d gone the Hurd-style micro-kernel route espoused by the likes of Tanenbaum, or do you feel that from an architectural standpoint Linux has benefited from having a monolithic design?
Linux has been massively more successful than Hurd, but I wonder how much of that is down to intrinsic technical superiority of its approach, and how much to the lack of a central driving force supported by a community of committed developers? It always seemed like the Hurd model should have allowed more people to be involved, but that has never seemed to be the case.
Linus: I think microkernels are stupid. They push the problem space into *communication*, which is actually a much bigger and fundamental problem than the small problem they are purporting to fix. They also lead to horrible extra complexity as you then have to fight the microkernel model, and make up new ways to avoid the extra communication latencies etc. Hurd is a great example of this kind of suckiness, where people made up whole new memory mapping models just because the normal “just make a quick system call within the same context” model had been broken by the microkernel model.
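To make the communication point concrete, here is a toy sketch, with hypothetical names (nothing from Hurd or Linux): what a monolithic kernel does with a plain function call in the same context becomes, in a microkernel, a message to another server, paying marshalling and context-switch costs on every request.

    /* Monolithic: the filesystem asks the block layer directly --
       one function call, same address space, same context. */
    int read_block(unsigned long blocknr, char *buf);

    /* Microkernel: the same request becomes IPC to a disk server.
       ipc_send()/ipc_recv() are hypothetical primitives, stubbed out
       here only so the sketch compiles. */
    int ipc_send(int port, const void *msg, unsigned long len) { return 0; }
    int ipc_recv(int port, void *buf, unsigned long len) { return 0; }

    struct disk_request { int op; unsigned long blocknr; };

    int read_block_via_ipc(int disk_port, unsigned long blocknr, char *buf)
    {
        struct disk_request req = { 0 /* READ */, blocknr };
        if (ipc_send(disk_port, &req, sizeof(req)) < 0)   /* context switch */
            return -1;
        return ipc_recv(disk_port, buf, 4096);            /* context switch */
    }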
Btw, it’s not just microkernels. Any time you have “one overriding idea”, and push your idea as a superior ideology, you’re going to be wrong. Microkernels had one such ideology, there have been others. It’s all BS. The fact is, reality is complicated, and not amenable to the “one large idea” model of problem solving. The only way that problems get solved in real life is with a lot of hard work on getting the details right. Not by some over-arching ideology that somehow magically makes things work.
Avoiding the Unix Wars
by dkleinsc
Why do you think Linux has been able to (mostly) avoid the fragmentation that plagued the competing Unixes of the 1980s? What would you say helps keep Linux a unified project rather than a more forked system like BSD?
Linus: So I’m a huge believer in the GPLv2, and I really do believe the license matters. And what – to me – is important for an open-source license is not whether you can fork (which the BSD’s allow), but whether the license encourages merging things back.
And btw, before people go all “license flamewar” on me, I would like to really emphasize the “to me” part. Licensing is a personal choice, and there is no “wrong” choice. For projects *I* care about, and that I started and can make the licensing decision for, I think the GPLv2 is the right thing to do for various reasons. But that does *not* mean that if somebody else makes another choice for his or her code, that wouldn’t be the right choice for *that* person.
For example, I’d use a BSD-like license for code that I simply didn’t care about, and wanted to just “push out there in case somebody else wants to use it”. And I don’t think proprietary licenses are evil either. It’s all fine, and it’s up to the original author to decide what direction they want to go in.
Anyway, to just get back to the question – I really do think that encouraging merging is the most important part for a license for me. And having a license like the GPLv2 that basically *requires* everybody to have the right to merge back useful code is a great thing, and avoids the worry of forking.
And I do want to say that it’s not that forking is bad. Forking is absolutely *required*, because easy forking is how development gets done. In fact, one of the design principles behind git was to make forking easy, and not have any technical barrier (like a “more central repository”) that held back forking. Forking is important, and forking needs to happen any time there is a developer who thinks that they can do a better job in some area. Go wild, fork the project, and prove your point. Show everybody that you can make improvements.
But forking becomes a problem if there is no good way to merge things back. And in Linux, it’s not been just about the license. Sure, the license means that legally we can always merge back the forks if they prove to be good forks. But we have also had a culture of encouraging forking and making forking be something that isn’t acrimonious. Basically *all* the Linux distributions have had their own “forks” of the kernel, and it’s not been seen as something bad, it’s been seen as something natural and *good*. Which means that now the fork is all amicable and friendly, and there are not only no legal issues with merging it back into mainline, but there are also generally no big personality clashes or bad feelings about it either.
So it’s not that Linux doesn’t fork, it’s that we’ve tried to make forks small and painless, and tried to be good about merging things back. Sure, there are disagreements, but they get resolved. Look at the Android work, for example: yeah, it wasn’t all happy people and no problems, and it took a while, but most of it got merged back, and without excessively bad feelings, I think.
GIT
by vlm
If you had to do GIT over again, what, if anything, would you change? A VERY closely related question: do you like the git-flow project, and would you think about pulling that into mainline or not?
Linus: So there’s been a few small details that I think we could have done better, but on the whole I’m *very* happy with git. I think the core design is very solid, and we have almost zero redundant information, and the core model is really built around a few solid concepts that make a lot of sense. Git is very unix-like in that it has a few basic design things (“everything is an object” with a few basic relationships between the different objects in the git database) and then a lot of utility is built up around that whole thing.
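To give a feel for how few concepts that is, here is a loose sketch of the object model he is describing (illustrative C struct layouts, not git’s actual source): every object is named by the hash of its contents, and the whole history is just a few types pointing at each other.

    /* Everything in the object database is a blob, tree, commit, or
       tag, addressed by the hash of its contents (SHA-1 in classic git). */
    struct object_id { unsigned char hash[20]; };

    /* A blob is just file contents. A tree is a directory: names
       mapped to blob or tree ids. */
    struct tree_entry {
        const char      *name;
        unsigned         mode;
        struct object_id oid;        /* a blob, or another tree */
    };

    /* A commit is a snapshot (one tree) plus where it came from. */
    struct commit {
        struct object_id  tree;
        struct object_id *parents;   /* zero or more parent commits */
        unsigned          nparents;
        const char       *author;
        const char       *message;
    };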
So I’m very proud of git. I think I did a great design, and then others (and Junio Hamano in particular) have taken that great design and really run with it. Sure, it wasn’t all that pleasant to use for outsiders early on, and it can still be very strange if you come from some traditional SCM, but it really has made my life *so* much better, and I really think it got the fundamentals right, in ways that SCM’s that came before did not.
As to git-flow, I want to really re-iterate how great Junio Hamano has been as a git maintainer, and I haven’t had to worry about git development for the last five years or so. Junio has been an exemplary maintainer, and shown great taste. And because I don’t need to, I haven’t even followed some of the projects around git, like git-flow. It’s not what I need for *my* git workflow, but if it helps people maintain a good topic-branch model with git, then all the more power to them. And whether it should go into mainline git or not, I won’t even comment on, because I absolutely trust that Junio will make the right decision.
Storage advancements in the kernel?
by ScuttleMonkey
Now that Ceph is gathering momentum since having been included in the mainline kernel, what other storage (or low level) advancements do you see on the horizon? (full disclosure: I work for Inktank now, the consulting/services company that employs most of the core Ceph engineers)
Linus: I’m not actually all that much of a storage guy, and while I’m the top-level kernel maintainer, this is likely a question that would be better asked of a number of other people.
The one (personal) thing storage-related that I’d like to re-iterate is that I think that rotating storage is going the way of the dodo (or the tape). “How do I hate thee, let me count the ways”. The latencies of rotational storage are horrendous, and I personally refuse to use a machine that has those nasty platters of spinning rust in it.
Sure, maybe those rotating platters are ok in some NAS box that you keep your big media files on (or in that cloud storage cluster you use, and where the network latencies make the disk latencies be secondary), but in an actual computer? Ugh. “Get thee behind me, Satan”.
That didn’t answer the question you really asked, but I really don’t tend to get all that excited about storage in general.
favorite hack
by vlm
I asked a bunch of hard architecture questions, now for a softball Q. Your favorite hack WRT kernel internals and kernel programming in general. drivers, innards, I don’t care which. The kind of thing where you took a look at the code and go ‘holy cow that’s cool’ or whatever. You define favorite, hack, and kernel. Just wanting to kick back and hear a story about cool code.
Linus: Hmm. You do realize that I don’t get all that close to the code any more? I spend my time not coding, but reading emails, and merging stuff others wrote. And when I *do* get involved with the code, it’s not because it’s “cool”, it’s because it broke, and you’ll find me cursing the people who wrote it, and questioning their parentage and that of their pets.
So I very seldom get involved in the really cool code any more, I’m afraid. I end up being involved in the “Holy sh*t, how did we ever merge that cr*p” code. Perhaps not as much as Greg (who has to deal with the staging tree), but then Greg is “special”.
That said, we do have lots of pretty cool code in the kernel. I’m particularly proud of our filename lookup cache, but hey, I’m biased. That code is *not* for the weak of heart, though, because the whole lockless lookup (with fallbacks to more traditional locked code) is hairy and subtle, and mortals are not supposed to really look at it. It’s been tweaked to some pretty extreme degrees, because it ends up being involved any time you look up a filename. I still remember how happy I was to merge the new lockless RCU filename lookup code last year.
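Very roughly, the pattern behind that code: readers proceed without taking any locks and validate afterwards that nothing changed underneath them, falling back to the locked path when it did. A greatly simplified sketch of that idea using a sequence counter (nothing like the real dcache code):

    #include <stdatomic.h>

    struct cache {
        atomic_uint seq;    /* writers make this odd while updating */
        /* ... the cached data ... */
    };

    /* Optimistic read: returns 0 if the snapshot was consistent, -1 if
       a writer raced us and the caller must take the slow, locked path. */
    static int read_optimistic(struct cache *c)
    {
        unsigned before = atomic_load(&c->seq);
        if (before & 1)
            return -1;                      /* write in progress */
        /* ... copy out the cached data ... */
        unsigned after = atomic_load(&c->seq);
        return before == after ? 0 : -1;    /* changed underneath us? */
    }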
At the opposite end of the spectrum, I actually wish more people understood the really core low-level kind of coding. Not big, complex stuff like the lockless name lookup, but simply good use of pointers-to-pointers etc. For example, I’ve seen too many people who delete a singly-linked list entry by keeping track of the “prev” entry, and then, to delete the entry, doing something like
    if (prev)
        prev->next = entry->next;    /* unlink from the middle */
    else
        list_head = entry->next;     /* special-case the head */
and whenever I see code like that, I just go “This person doesn’t understand pointers”. And it’s sadly quite common.
People who understand pointers just use a “pointer to the entry pointer”, and initialize that with the address of the list_head. And then as they traverse the list, they can remove the entry without using any conditionals, by just doing a “*pp = entry->next”.
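A minimal sketch of that two-star version (illustrative types, not kernel code):

    struct entry {
        int           value;
        struct entry *next;
    };

    /* pp always points at the pointer that must be rewritten -- first
       &list_head, then each node's &next -- so unlinking needs no
       head-versus-middle conditional at all. */
    void remove_entry(struct entry **pp, int value)
    {
        while (*pp) {
            struct entry *entry = *pp;
            if (entry->value == value) {
                *pp = entry->next;          /* the unconditional unlink */
                return;
            }
            pp = &entry->next;
        }
    }

Called as remove_entry(&list_head, 42), the head case falls out for free.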
So there’s lots of pride in doing the small details right. It may not be big and important code, but I do like seeing code where people really thought about the details, and clearly also were thinking about the compiler being able to generate efficient code (rather than hoping that the compiler is so smart that it can make efficient code *despite* the state of the original source code).
Books, Books, Books
by eldavojohn
As a software developer, I have a coveted collection of books. A few of said tomes — both fiction and non — have fundamentally altered the course of my life. Assuming yours aren’t just man pages and .txt files, what are they?
Linus: I read a fair amount, but I have to admit that for me reading tends to be about escapism, and books to me are mostly forgettable. I can’t really think of a single case of a book that struck me as life-changing, the way some people apparently find some book that really changed the way they think.
That said, I’ll point to a couple of books I really enjoyed. On the non-fiction side, Richard Dawkins’ “The Selfish Gene” was one book that I think is pretty influential. On the fiction side, as a teenager I enjoyed Heinlein’s “Stranger in a Strange Land” a lot, and I have to admit to “Lord of the Rings” having been pretty important to me – but for a slightly odd reason, not as a huge Tolkien fan. For me, it was one of the first “real” books I read in English, and I started with a dictionary by my side, and ended it reading without needing one.
These days, I still read crap. I like my Kindle, and often read the self-published stuff for 99c. There are some real stinkers in there, but there’s been a lot of “that was certainly worth the 99c” stuff too. I’ve also enjoyed just re-reading some of the classics I grew up with – I just re-read both the Count of Monte Cristo and the Three Musketeers, for example.
How do you deal with burn-out?
by kallisti5
You must have been burned out on Linux kernel development multiple times over by now… how do you deal with it?
Linus: Oh, I really enjoy what I do. And I actually enjoy arguing too, and while I may swear a lot and appear like a grumpy angry old man at times, I am also pretty good at just letting things go. So I can be very passionate about some things, but at the same time I don’t tend to really hold on to some particular issue for too long, and I think that helps avoid burn-out.
Obsessing about things is important, and things really do matter, but if you can’t let go of them, you’ll end up crazy.
So to me, some of the occasional flame-wars are really just invigorating. And the technology and the use cases end up changing enough that things never get *boring*, so I actually have not been close to burning out very often.
The one really painful time was some time during the middle of the 2.4.x series (about ten years ago), before I got to hand it over to stable maintenance, and we really had a lot of problems going on. You can google for “Linus doesn’t scale” and various other threads about the problems we had back then, and it really was pretty painful. The kernel was growing and I wasn’t keeping up, and BitKeeper and some fairly painful process changes really ended up helping a lot.
Describe your computer
by twistedcubic
Can you describe in detail your home and work computers, including processor, motherboard, and graphics card? And also say something about their compatibility with Linux?
Linus: My home computer isn’t actually all that interesting: I don’t need all that much CPU power any more, and for the last several years, my primary requirement (since CPU’s are fast enough) has been that the system be really really quiet, and that it has a good SSD in it. If our cat deigns to jump into my lap while I’m working, the loudest noise in the room should be the purring of the cat, not the computer.
So my main desktop is actually a 4-core Westmere machine, not really anything special. The most unusual part of the machine is probably just the fact that it has a good case (I forget the exact case name now) which avoids rattling etc. And one of the bigger Intel SSD’s. I think I’ll be upgrading some time this fall, but I will have had that machine for two years now, I think.
My laptop (that I’m writing this with, since I’m traveling in Japan and Korea right now) is an 11″ Apple MacBook Air from last year (but running Linux, of course – no OS X anywhere), because I really hate big laptops. I can’t understand people who lug around 15″ (or 17″!) monsters. The right weight for a laptop is 1kg, no more.
Re:The End
by Narnie
Speaking of ends, one day you’ll pass on your duties. How do you envision the kernel and the Linux ecosystem after you hand over the reins?
Linus: Oh, the kernel really has a very solid development community, I wouldn’t worry about it. We’ve got several “top lieutenants” that could take over, and I’d worry much more about many other open-source projects that don’t have nearly the same kind of big development community that the kernel does.
That said, I’ve been doing this for over twenty years now, and I don’t really see myself stopping. I still do like what I’m doing, and I’d just be bored out of my gourd without the kernel to hack on.
from: https://meta.slashdot.org/story/12/10/11/0030249/linus-torvalds-answers-your-questions