Git and GitHub


It was not a new concept. Git took a lot from BitKeeper, which was not only not new but was deeply familiar to kernel developers after years of use. If you are worried about git push -f and accidental remote branch deletions, have a look at http: One thing I find a bit frustrating in this whole debate is that, in theory, interface and underlying capabilities should be separate and interchangeable.

It ought to be possible to have a git-like interface to Mercurial if you want that extra power (e.g., rewriting published history), and it ought to be possible to have a Mercurial-like interface to Git for ease of use. Steveko and a few more people here are very reasonable in their expectation that, in this day and age, we deserve to have tools with a smart, rather than cryptic, user interface.

And smart error and warning messages that respect the user. But the Linux community was always the opposite of that. Now that they have learned everything by heart, it seems easy. After all, an average human being only lives so many years, of which youth lasts just the first part. Because fixing the UI would mean they would have to learn it all again.

Now that they know how to use the tool, why should newbies have it any easier than they did? Let them go through the same hell. Let everyone forever use a crappy UI just because they had to. Let it still be in use in the 24th century. At the risk of over-repetition, tools that provide a friendly and much easier-to-understand front-end to git already exist. My favourite is SmartGit, which I have used for several years, but I am now using Sourcetree at work and I have to say that I am very comfortable with it too.

To everyone who is wrestling with the command line: de-stress your life by downloading SmartGit, Sourcetree and GitKraken, then picking the one you fancy.

There is a group of people who do not value good interfaces. But they are a subset of computer users, not a subset of the Linux community. Took long enough, though… You might be able to trigger it via remote hooks somehow, though. This is a good example of the mindset of the average Hg user: I use hg-git whenever I interact with a git repo. When I use it for larger-scale development, I use a bridge repo (hg clone git: For working on something I then clone the bridge repo, hack on it, push back into the bridge repo, and push from the bridge repo into git.

For merging history, did you check the convert extension? That can rewrite file paths, get partial history and all that stuff. Your git diagram illustrates how to contribute to a project that you are not an approved contributor to.

Most projects keep tight reins over who can contribute, and they would tell you to submit a patch. My guess, based on admittedly limited experience, is that it was much more common to grant direct commit access on Subversion projects than Git ones. MediaWiki is a case in point. Contributors with push access need to be painfully aware of what forcing a push does.

Much the same applies to rebasing. Git encourages you to commit frequently with the promise that you can tidy up later. You assume that the SVN user has direct commit access to their own project, but that the Git user is collaborating with third parties on GitHub.

If you frequently commit small, poorly-defined chunks of work to a public repository, you will soon receive complaints. Your assumption that those local commits have no value is wrong. The ability to commit locally and use it as a worklog is fantastic, and if you fix a bug you can just cherry-pick it to a production branch from that local branch. Following on from that, the idea that squash and cherry-pick are complex commands is borderline insane to me; they are both exceptionally straightforward, some of the easiest parts of git to learn… it sounds like someone has never bothered to try them if they are having issues with them.

Seriously, let me teach you git cherry-pick. Find the commit you want… note its hash. Checkout the branch you want it applied to… git cherry-pick … you now have enough git-fu to cherry-pick. Regarding cleaning up existing commits (extremely difficult, according to you): let's say you have done 8 ugly commits, and you want to clean them up…

I suspect people are confusing git being hard with an absolute inability to be bothered to learn the tool. It amounts to a new VCS, except its interface is not really more usable than git's.

So why am I using hg, again? Even if hg is easier and saner at the beginning, if you need staging then git very quickly becomes easier, because hg requires you to use mq to work on your commits, while git allows you to use a pretty good VCS to work on your commits (namely, git).

I should not be forced to start proliferating working copies just to put together clean commits to a public repo; that is insanity. Why use mq at all? The workflow should be kept as simple as possible. One of the main points of git is that maintenance is pushed to the edge; it allows for greater scaling. Imagine if Linus had to do the housekeeping for every coder on the kernel; nothing would be achieved.

And then there is http: I have paid for free software, but the non-free stuff you did to Linux about forbidding reverse-engineering the protocol is unforgivable, especially if you plan to keep on doing that. I suppose a better answer might have been a stripped-down free version (no GUIs, no subrepos, no binary server) and a for-pay enterprise-y answer.

In retrospect, that might have been a better answer. As for not reverse-engineering the protocol: I know everyone hates that, but we were trying to protect what little business model we had. GitHub is making squillions off hosting Git, yet as I understand it, the GitHub code itself is closed source. In a nutshell, if GitHub turns around tomorrow and demands money, or cuts off access, then everyone who currently uses GitHub will simply switch to another repo for upstream. There is a substitute.

There are several, in fact. Check out bzr, darcs, or my favourite, Mercurial. The very defensive counter-arguments from self-proclaimed power users are very shaky. There are just so many permutations of potential errors and problems with Git that even power users are most likely ignorant of just how many errors they introduce into their beloved system. He writes as he thinks. Mercurial, having the same powerful features, looks way more professional.

Since git relies on shell scripts for extensions, you cannot actually change the commands without breaking everyone's scripts… That porcelain is pretty much set in stone, but the name fits, since it makes everything built on it extremely brittle… Sometimes, in established, large projects, that forced structure is a good thing. How are you forced by git to make feature branches? If you like, you can work directly in master and never create any branches.

Or, if you prefer, you can make all your changes in the same branch and then pull that into master; this might be good if you are using GitHub.

In general, when dealing with an interface of any sort, you never change existing interfaces; you only add new ones. You do not even deprecate old interfaces until after you have addressed most of the needs served by the older forms. And deprecating basically means the old interfaces keep working for some period of years, but set off warnings when they are used. Actually, introducing new interfaces can get crazy also. The most useful thing you can do with most interface changes is to reject them…

You can do that. Of course, and this is worth repeating: Git, despite its protestations, is a social tool. Why should you need to change the commands? Because, if you are not already used to git, the current commands are pretty unintuitive and hard to learn. If you want an unchanging interface, you have to design it cleanly from the start and be extremely conservative with changes. And the easiest way to state that you want an unchanging interface is to tell people to use it in scripts.

As soon as you have enough users, every change to an existing option wreaks havoc on so many scripts you do not control that you cannot actually make the change. Arne is confusing the API, where your semantics are fixed once you name them, with the user interface.

This kind of simplified FUD is just not constructive. Anything you can do in Subversion you can also do in git. Try to maintain a large project in Subversion before you say such things. It's a horror, and all the simple things about the Subversion API are just not worth anything. I worked on a 3 GB repo at a previous company, without any problems.

Part of the problem is that no one considers the corner cases of the system they prefer; everyone would rather spread FUD in favour of their preferred system. There are thousands of great open source developer tools out there.

The Linux and GNU communities have a particularly unfortunate, and highly influential, attitude towards how people should use and develop software — but not everyone subscribes to it.

To a beginner, these all sound like they might do the same thing. Everyone is a committer; everyone is a maintainer. Branches and tags are unshared by default. To some degree, this level of decentralization is just a total paradigm shift. As others have mentioned, Mercurial is more moderate in this way. No argument from me about the learning curve and negative usability of the information model, documentation, and commands. A rubbish article, comparing apples with oranges.

Please identify what you are comparing: distributed VCS or centralised VCS? You should either present how SVN is used in a distributed environment in order to compare it with Git in the same usage, or present how Git is used in a centralized environment in order to compare it with SVN in the same usage. This would help newcomers learn how they should use them and which tool is more suitable. I can even add: for my code to be safe, I have to feel in control of it.

With git I feel lost and insecure. How safe is my code? For my code to be safe I have to use a tool that safeguards my code from the ground up, and I must use that tool properly.

Feeling has little to do with it, but nevertheless, with git I feel powerful and completely in control. I have never lost any code with git.

Here is an assessment from my point of view. TFS — hands down the best option for a large team of .NET developers working on a single project. VSS — total crap! This is where I could go on a rant, but I won't.

Just google VSS and you will see. SVN — great open source version control. There is not a lot to complain about. As a matter of fact, my home development server ran SVN for years. However, I did switch everything over to git.

You have to have a server, just like TFS. There is administration work involved, just like TFS. Git — fast, flexible, and easy. They are just like GitHub, but you can have free private repos as my remote. There are just more ways to manage them that are not really necessary for smaller teams. The other downside of git is that some folks are afraid of command-line tools for some reason.

With git, I feel this is not the case. And as a related note, who the hell thought it was a good idea to use the write bit of the file permissions as the is-this-checked-out flag?! But yes, you can obviously script your way around anything for one local environment. By making it scriptable using the basic commands, they pretty much barred their way towards ever having a simple interface.

If they change one of their core commands in a backwards-incompatible way, all user scripts break. And this has already happened: there was a simpler git UI whose name I forget.

It tried to wrap around the git commands and regularly broke because of some incompatible change. You can avoid this if you use good abstractions for your API. But that requires more careful thought up front… Thank you for this.

I thought it was me. One day, on a big project using Git, I got an email saying not to commit any changes or pull anything, because the repository had been inadvertently reset to the state it was in three weeks before.

It appears a developer had returned from holiday and, having found it hard to merge their work, simply forced a push. This could be seen as an argument against git. I suspect management would consider it an argument against holidays. Could someone point out what advantage a distributed version control system has over, say, a clustered SVN system? They should have enabled the non-fast-forward protection on the remote — which is there for precisely this reason.
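The "non-ff option" mentioned here corresponds to git's server-side receive settings; a minimal sketch, assuming these are run inside the shared (bare) repository:

```shell
# On the server-side (bare) repository: reject forced pushes that would
# rewrite published history, and reject remote branch deletions.
git config receive.denyNonFastForwards true
git config receive.denyDeletes true
```

With these set, a `git push -f` that rewinds a published branch is refused at the server, which would have prevented the three-weeks-of-history reset described above.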

The user workflow could still be improved further, though. So then you wind up wasting time, forever, instead of just paying the up-front cost to design software properly. In terms of market share, Linux is a success. In terms of technical quality, Linux is an unmitigated disaster zone of low-quality code, failed designs, and rewrites. Git was built to support that development model.

Pull requests and the concomitant branch mania are a relic of the GitHub era, not Git itself. I would agree, if I did not have Mercurial, which does everything git does while being easy to use. That is a serious reason not to use it. The whole point of an SCM is to protect your commits.

Anything else is just gravy. That might be OK in an open source situation, but it is totally unacceptable if you got paid to write it. Maybe when the SVN guys add shelving, the reasons people use git will go away, but I doubt it.

The reasons to choose it seem more political than anything else. If your safety system relies on everybody knowing everything the self-appointed best coder knows, and on having a backup system that is somehow bullet-proof (and budget-proof), and moreover fear of dismissal is your primary data protection method, your company is quite clearly a terrible place to work.

I would argue everything from terminology to failsafe conditions could do with some renewed thought. Like the SA rifle, raw power is somewhat undermined if the tool sometimes kills the user. If you need to merge between branches, as far as I understand it, the information model of SVN is just not really good for that, and so you are continuously confronted with stupid behaviour, such as your own commits causing tree conflicts when you merge them back from trunk, and operations including deleting and renaming files can become quite destructive.

Worse, they often seem innocent when performed and become destructive when merged. SVN is really not at all suitable for distributed open-source development. I have not tried, but I can well imagine the excruciating pain and inconvenience that it would cause.

Having said that, for small, co-located teams undertaking typical development activities in an SME-like environment, particularly when one or more team members do not have a software development background, ease of use is paramount. Create an organisation on GitHub and your problems will be solved. It has been two years since I moved my colleagues onto git and GitHub. None of them are programmers, and none of them had any experience of git or, for that matter, versioning systems.

Now everybody is happily working with no problems. The workflow is nothing more than git pull, git commit -a, git push. Sometimes new branches are created; then you need to say precisely which branch to push onto, and you need to use git checkout to switch between branches. In two years, I had only one cock-up, but after 10 minutes of swearing I fixed the problem with no changes lost. The only thing which bugs me a little bit is merge commits in the history.
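The everyday workflow described here can be sketched as follows (the remote name `origin` and branch names are placeholders; the default branch may be `master` or `main` depending on configuration):

```shell
git pull                                 # bring in colleagues' changes
# ...edit files...
git commit -a -m "describe the change"   # commit all modified tracked files
git push                                 # publish to the shared repository

# When a new branch is involved:
git checkout -b new-analysis             # create it and switch to it
git push -u origin new-analysis          # say precisely where to push
git checkout master                      # switch back afterwards
```

Note that `git commit -a` only picks up files git already tracks; brand-new files still need an explicit `git add` first.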

I could do something about it, but I am too lazy. As a maintainer, I sometimes had to dive into deeper waters, but I had no problem finding information on how to do that. I also prepared mini documentation for my users, which contains everything they need to know.

Yeah, the problem with advice like this, though, is that it comes from the perspective of a codebase maintainer. My frustrations mostly stem from having to adapt to existing GitHub repositories. What are you maintaining — documentation? Do you worry about keeping a clean history, encouraging your users to rebase? Well, I call myself a maintainer, but I am really only the most knowledgeable user, who pays for the private repositories.

Several projects are running happily without my intervention; I only participate in them for my real work, statistical consulting. I work on statistical projects, and git is used for sharing R code and data. I do not worry about clean history, because I have not found it useful. As I said, merge commits are annoying, but they do not interfere with blame history, so I can track who did what.

And yes, I understand that my setting might not be typical. But my experience contradicts several of the points you made. I think you highlighted an important point here.

The ironic thing is that your instructions, and especially your diagram, helped me understand what is going on, because Git documentation tends to focus entirely on the relationship between your working directory and your local repository.

My impression is that they instead each have their own repo and either email patches or ask others to pull from them. This quirk of how kernel development works means they do things differently from most git users.

But some UI decisions were made with the kernel workflow in mind. If you do anything you cited in your article to the repository, you can always use reflog and checkout to get back to the state before the destructive push. History-editing operations like a bad rebase can also destroy data.

Well, I agree about garbage collection. But you can recover from a push --mirror or a bad rebase by checking your reflog, unless it has already been garbage collected. But you have a point. Git is not a mere toy, and you have to take care of some things to make good use of it. I totally agree with most of your statements.
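The reflog recovery mentioned here can be sketched in two commands; `HEAD@{1}` below is just an example entry, the right one depends on what the reflog actually shows:

```shell
# List where HEAD has been, including states "lost" by a bad reset or
# rebase (entries survive until git gc prunes them):
git reflog

# Restore the repository to the state before the damage, e.g. the
# entry labelled HEAD@{1}:
git reset --hard 'HEAD@{1}'
```

This is why the "permanently destroyed" data discussed in the thread is usually still recoverable for a while: the commits are unreachable, not deleted.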

So I like to be able to pull from my repo anywhere with a connection. Because I have a home server, which is also my PHP server as well as my SVN remote repo, I wanted to configure git with my current XAMPP virtual host setup.

But I have been at it for around 4 days now, looking at a lot of tutorials and blogs about serving git over HTTP. It is just so damn complicated and not user friendly. Setting up SVN under my XAMPP was extremely simple. I just gave up. Where is the repository and how big is it? However, you are being overly kind towards Subversion. We have a rather messed-up corporate intranet that makes interfacing with the remote SVN server quite painful; the latency especially is an issue.

Branching and merging requires a single person dedicated to that job. That is both stupid and costly if you are used to git.

Our use of Subversion is a poster-child use case for why using Subversion is a bad idea. But it is certainly a big part of it. git-svn has liberated me from that madness. I can work for weeks on my private branches while keeping up to date with the latest changes in svn, and routinely git svn dcommit large changesets.

I actually use a remote git repository for storing and sharing my branches and commits (we have a nice GitHub-like facility where I work). It scares the hell out of my colleagues, because they are not used to seeing that much change appear in svn in a short time frame.

But I do my due diligence of making sure all tests pass before I dcommit, so no harm gets done. Basically, by the time I dcommit to svn, my work is in a releasable state. But Git is definitely hard to master, and there is a quite high barrier to getting started. However, the reason git is rapidly replacing Subversion as the VCS of choice for most OSS projects and many corporations is that, overall, you are better off with git than with svn. It enables teams to change their workflows and not be blocked on a central resource.

Changing the workflow is essential, because it is entirely possible to use git like you would use svn, which is not a way you are going to get much out of git. Especially with larger teams, changing the workflow is a very big deal.

I agree it may seem like overkill if you have been treating your VCS as a glorified file server that you use for backing up work in progress. Which is pretty much the way most engineers tend to use it, sadly. But then, if you work that way, maybe you should consider using a version control system properly. Fear of branching and merging is a widespread thing among Subversion users, and for good reasons. The hard part of using git is learning to merge and branch properly, and then unlearning the idea that these activities are somehow dangerous, tedious and scary.

Everything is a branch in git. Your local repository is a branch. The remote git repository is a branch, and if you use git-svn, svn trunk is just another branch.
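The routine branch-and-merge cycle that the commenter says Subversion users need to unlearn their fear of looks like this in git (branch names are placeholders; the main branch may be `master` or `main`):

```shell
git checkout -b cleanup   # fork a cheap topic branch from the current HEAD
# ...commit some work on it...
git checkout master       # return to the main line
git merge cleanup         # fold the topic branch back in
git branch -d cleanup     # the branch is just a pointer; discard it
```

Branches in git are only ~40-byte pointers into the commit graph, which is why creating and deleting them this freely is normal practice rather than a heavyweight operation.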

Git is a merging and branching Swiss army knife. So, if you are stuck with Subversion, do yourself a big favour and learn how to use git-svn. Yeah, I agree with basically all of that. SVN is easier to use; Git is more powerful. In my sector, SVN is basically dead — no one, myself included, really uses it by choice anymore. Distributed naming is very hard.

The moment you have to start adding disambiguating information, you end up with something that looks like a URL. If you want to create a short name, then it should be up to you to choose what you want as a short name.

If you relax that constraint, with the fallback position of URLs, then you could improve usability. A default alias "my-repo" would clash with an existing repo "my-repo" http: But repositories do have local names. A repository name is the file system path to the repository. Yeah, I can understand why Svn might be getting in your way. My Svn experience has been in teams with a few developers, using really simple branching strategies (all development in trunk), so my experience with Svn has been very positive overall.

I am currently using Hg, and with my current use case can see no overwhelming advantage of one over the other. It is worth saying that much of the development I have done has involved compute-intensive testing and performance measurement, typically run on a limited-availability resource (a High Performance Computer or cluster), driven by a Continuous Integration server. As a result, there was never really any practical alternative to the develop-everything-in-trunk branching strategy, and distributed development was never really an option either, if the only way you have of performing a non-trivial test of your code is to run it on a centralized, shared compute resource.

It is also worth noting that developers were encouraged to check in several times per day, which further limited the scope of each individual change, and further reduced the probability that two changes would affect the same part of the same file. Indeed, its simplicity gives it a distinct advantage in these situations. For large, distributed teams on the other hand, where developers are forced to work independently anyway, Hg or Git are obviously more suited.

For large, co-located teams, such as the situation described above, I suspect that much of the pain of using Svn is related to the use of complex branching strategies, as well as the manner in which code is organized and development is carried out.

Using Git may well reduce the amount of pain, but it seems that there are other, more fundamental problems that need to be addressed, such as how the code is organized, and why presumably independent changes are hitting the same lines of code.

Can you explain how something that saves everything is smaller than something that just saves differences? It makes no logical sense, at least to me. I find Git much easier to use than SVN. I have a question (actually really looking for answers): And why not a working dir per local branch?

You can clone just one branch X if you wish, but you have to bear in mind that this branch might depend on commits from another branch Y. Both Mercurial and git allow this. This concept is not exposed directly in Mercurial, and it is seldom necessary to worry about it there.
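For the git side, both ideas raised here have direct support; a sketch (the URL, branch and directory names are placeholders):

```shell
# Fetch only the history reachable from one branch:
git clone --single-branch --branch feature-x https://example.com/project.git

# A separate working directory per branch, without re-cloning,
# via linked worktrees that share one object database:
git worktree add ../project-feature-x feature-x
```

A branch can only be checked out in one worktree at a time, which is git's answer to the "one working dir per local branch" question: you opt into it per branch rather than getting a directory per branch automatically.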

About one filesystem tree per branch: there is another DVCS, bzr, which actually takes this approach. With Mercurial, since local cloning is so easy, I also do this sometimes.

I have local clones for major branches (the named branches) and push and pull locally. I could not disagree more. You can literally click in the bottom right, create a new branch, make changes, commit, switch back to another branch, merge, and never have to shut down, restart, reconfigure, reset ENV paths, reset run paths, etc.

While idealism says the location of a project should never matter, when attempting to run large enterprise apps, whether monolithic or micro-services based in nature, at some point your setup is going to rely on the specific location of your source code. The separate directories are one of the more annoying things about SVN. Not to mention that this is an invalid argument: nothing stops someone from cloning a Git repository as many times as they want, into as many directories as they want, and changing the branch in each of the folders to whichever branch they want.

You can run a whole distributed system locally without the need for a server. It also supports NOT having a different folder for each branch. SVN supports this with switch, if you want to risk losing all your code. I recently switched from Subversion to Git. But this turns the conversation into an apples vs. oranges one. Git is a distributed version control system. As this is what Linus does. If we want to get extremely picky and attempt to compare SVN vs. Git when adding a new file, these are the only commands you need:

I submit the anecdotal evidence that I have accidentally destroyed many hours of work by running commands that sounded totally reasonable but turned out to permanently destroy data. In the end, I recloned, accepting the loss of time instead of investing much more time which I could have used for recreating the work instead…

It is unfortunate you lost code. That is probably one of a developer's worst nightmares. Even if you go through and force deletes (which can easily be locked out), the data still exists until Git garbage collection actually prunes it.

So the commit graph shows everything that has happened. Git rebase is used to replay the commits you have locally on top of commits that have been fetched from a remote graph. Think of it as time travel. If you were the only one to have experienced those events in time, no biggie. If you check in a password and need to get rid of it… And even if you do manage to remove something, it is almost guaranteed to survive until garbage collection runs.

Git is distributed version control and Subversion is not. Hence there are extra steps in committing, because there is a remote repo to consider, which it is not necessary to update on every single commit. As for then having to issue pull requests, that is again just an extra step, as it is not needed on every commit.

So actually, more often than not, git has the same number of steps for storing code changes as Subversion. It also refers to making a frustrated and ignorant assumption about the current state of your local repo, index, or working directory. How does GitHub for Windows affect the equation? It is supposed to be a superior UI, at least for Git. As far as I know, git was born out of necessity. Git was designed for versioning the Linux kernel, which is not a software project that you and me work on on a daily basis.

All the hacks were not bolted on afterwards but were designed into the core model. It is not a mangoes-to-mangoes comparison. If comparing is necessary, then one could point out what Linus says in a talk: that Subversion is the single piece of software that was started without a good reason.

Code is meant to be read. Rebasing lets you work however you want on your repo, then clean things up so that your commits are easier for others (and your future self!) to read. This is a good thing. If you really wanted to preserve the original commits, you could probably work in one branch and then rebase onto another branch.

Fossil is good; it provides a fully implemented DVCS and can be simplified via autosync mode. Its only weakness is that it cannot handle huge source trees. Not stopping history rewrites and telling developers to use rebase is the quickest way to lose days of work, weeks at a time. But as for the first two points: every time I try to teach someone some basic git commands, nobody understands the staging thing the first time. I might be a bad teacher, but I think it is just a little too abstract.

Secondly, the poor CLI syntax. Finally, you can use whatever front-end you like with git. This is more tongue-in-cheek than anything else, but: have a play creating blobs, trees, and commits. Definitely a pity that those messy concepts keep leaking up into the UI, though — you see references to tree-ishes and refs and such all over the man pages. I personally heartily recommend a perusal of the documentation in one of the many fine books about how git internals work. Once you realise it is just a big graph, glued together with SHA-1 and gzip, and all the functions are just different methods of mutating the graph, it seems much, much simpler.

The problem is that this is probably not the case. So there is, at least, some lock-in. For the record, I am one who loves hg but has found git difficult to work with, and has also found hg-git to work very well.

Well, in Mercurial… http: That way you have all your bugs in your repo. And ideally you do the same for your documentation. That is why I never fully committed to totally learning Git. I am dead sure it will be replaced by something simpler in the future. Yeah, there could be some truth in this. It took something like Git to make everyone see the benefits and potential of DVCS — now someone just needs to refine it. The complexity serves a purpose, and is actually very elegant.

Until we end up with an iPad with Clippy running on it, that is useless for anyone with a working brain. But of course, few people get to choose their VCS software anyway. Refusing to use Git vastly reduces the number of open source projects you can contribute to. More like a waste of time. Being decentralized is great, and so are easy branches, etc. But the badly (or apparently not at all) designed user interface is still a problem, and not just because I have to read the man page every time I need to know which git reset flag I need.
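For reference, the git reset flags the commenter keeps having to look up differ only in how far the reset reaches; a quick summary:

```shell
git reset --soft  HEAD~1  # move HEAD only; the undone changes stay staged
git reset --mixed HEAD~1  # the default: also reset the index; changes
                          # remain in the working tree, but unstaged
git reset --hard  HEAD~1  # also reset the working tree: changes are gone
                          # (recoverable via the reflog until gc runs)
```

The three flags form a progression (HEAD, then the index, then the working tree), which is arguably elegant internally but, as the thread complains, not something the command names themselves communicate.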

This is a nice write-up, and many of your points are valid. The thing that I like about Git is that, despite what you say in point 5, there is a lot of abstraction and quite a few nice shortcuts.

For example, you can skip the whole staging process (git add) by passing the -a flag to git commit. But the SVN concepts were easier to grasp. An extra option flag bolted onto one command is not an elegant abstraction, whereas a setting that completely hid the index from sight would be. There is a sweet spot for SVN for which it works quite well: you update, code, update again, resolve the rare conflict, and commit — simple and easy to understand.

If you want to create two separate patches for the same file, or you are asked to fix something in your patch, things will get messy quickly. On the other end of the scale, if you are a major contributor to a project of nontrivial size, which needs versioning and backwards compatibility, or has big rewrite efforts which should not conflict with the continuous maintenance going on (in other words, you need multiple branches), then SVN starts sucking badly: files disappear from the diff (because a merged add is actually not an add but a copy), commits start conflicting with themselves, code is duplicated without any edit conflict, and worse.

Not to mention that the whole thing is excruciatingly slow. Why make an inscrutable prompt when there is already a great prompt built into git-completion? As someone relatively new to git, the thing I find really messy is submodules. The concept is great but the implementation sucks. You really hit the nail on the head: tools and apps with intuitive interfaces and workflows have been around for a couple of decades, at least. So expecting something to be straightforward to operate is not an unrealistic dream.

But why do you want to make people change git for that? The complexity is harder to understand but if you have done so, it gives you much more power over your repository. But if you make it simpler, it will automatically destroy the advantages people love it for. There are many ways that the usability of Git could be improved without decreasing its power at all.

Yeah, I sometimes use EasyGit — but it definitely runs a risk of creating even more commands to remember. It should give pointers to the Git developers though. I think the problem with feature branches is not that you can use them, but that you have to. Feature branches are great, when you do somewhat larger development. But they are a needless complexity when you just want to do some smaller changes.

It does not just allow you to add names to your changes, but enforces that, even for the most trivial changes. In projects where many people work simultaneously from a given base (either large or small) and merge only after many smaller changes, that forcing does no harm, because they would want to separate and name their changes anyway.

But there, persistent naming would actually be more helpful, so people can later retrieve the information, why a given change was added. The information is lost. So the forcing does not help for large projects and does harm for small projects. I like the way you explained certain things but the use of illustrative diagrams is more exciting…thanks.

Git makes simple things hard and has a completely unintuitive CLI. Frontends like EGit do not help much. Mercurial is DVCS done right, from the usability point of view.

So the really interesting question is, why is Git so popular, when there is an equivalent but better alternative? Good strategic option likely not planned, though: Most times the maintainer chooses the DVCS, so cater to the maintainer to get spread around. Linus scratched his itch. Other maintainers had the same itch. Now the users are itching all over, but they do not get to choose the DVCS except with hg-git.

The way I like to put it is, Git was written by Martians. All your points are dead on right. You only need one git command and a second version control to get by. The drawback is that you don't find my tag in any git-based project.

Try to download Android Cyanogenmod source. You will likely need a day even with a fast internet line. I love git, but I agree that some of the commands are difficult to understand, and man pages for commands like rebase are awful.

Pull fetches all branches, but only merges or rebases the currently checked-out branch (HEAD); push pushes ALL your branches to any matching remote branches unless you specify which branch you want to push.
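The push-everything-matching behavior described above was the old default; since Git 2.0 the default `push.default` is `simple`, which pushes only the current branch. A sketch of pinning the safer behavior explicitly (run inside any repository; here a throwaway one):

```shell
set -e
cd "$(mktemp -d)" && git init -q .   # throwaway repo just for the demo
# Push only the current branch to its same-named upstream:
git config push.default simple
# The old push-all-matching-branches behavior can still be opted into:
#   git config push.default matching
git config push.default
```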

Git could definitely use a UI make over. I also agree that Git has a steep learning curve. I also like how open source projects use pull requests, which are infinitely better than the SVN equivalent: I also have trouble without the index when going back to SVN. It doesn't seem like a super useful feature until you go back to a system without it.

For those of you who think that power has to come with a crappy user interface, take this post I wrote a year ago as an example of something that is an obvious mistake in the UI design. It just makes it harder to learn. Even SVN is too difficult if you need to persuade non-software-developers to commit their code, so I agree with your thesis and then some!

The philosophy behind Git and most DVCS out there is that you need that complexity to manage the problem of version control effectively. Man pages suck, but I reckon Git has about the best docs out there.

There is a huge community, a lot of books, etc. Not sure what the example is about, is that a regular workflow for you? Seems like an exceptional flow. The Git model is clearly appreciated by contributors, as seen by the massive adoption. What helps maintainers is also good for contributors. I just press commit and push. Yes this is scary about Git. There is a back-up to get most changes back. Although sometimes useful, I feel this ease of destroying history is a golden rule Git broke.

I do think in a DVCS world it makes some sense to clean up a bit. I like the IBM Jazz model a lot. Git is by no means a perfect tool. But it is a massive improvement over SVN, especially for distributed open source development. If you need a complex tool to manage the problem, then why can Mercurial do it as well while staying simple? In that case you should just use it for some time — and have a look at hg-git, which provides you a transparent bridge.

So everything you can do in git can be done in hg, too. But most things are much easier. Let me know when you find such a system. The kind that uses the best tool for the job.

I stand by my statement that git is the most powerful. You can moan about it all you want, but it works and it works really well. But then, it will still take some time. But it would completely invalidate any argument you can make about the power of git. You can blame hacker news for me being here http: This is precisely the reason I went with hg. I plan to get my feet wet with the latter and if it sticks, remove svn from my vernacular forever. When using a DVCS and getting used to even the basics, going backwards is like taking a trip back to the stone age.

You mean I have to be connected to the internet to commit? The remote server has to be up? Hg Workbench combines basically every UI dialog into one location, which is amazingly useful. If I could choose it would always be Hg, but there are things I pine for in Git, well, specifically one. There is one complaint I could add: no ability to edit log messages after the fact. In SVN you can (if the repository admin enables it) tweak log messages after the fact, and it turns out to be really handy.

It has many other uses as well. Because of it, editing log messages post facto is a conceptually consistent thing to do. Safe, easy, propagating history rewriting. Can I use this as my homepage? OK, git has a steep learning curve, so we got someone to teach us; we made some videos and made them freely available here http: The combination does a very good job, obviates the need for the command line, and has returned far more than the learning cost and SmartGit licences. There are built-in tools in git for that.

This not only prevents from accidental or intentional history rewriting but also keeps your sources safe if your git server gets hacked.

GPG does not solve 9. It just means pushing the cost of scaling onto the contributors. If you do not use the --rebase option when you pull the contributions, how does the rewrite get into your repository? Or, if you are particularly strict (and you probably should be, if you are pining for svn), you could use --ff-only. If someone else does a rebase of stuff you already pulled, your history gets garbled, because you now have duplicate history.
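The --ff-only discipline mentioned above can also be made the default, so any pull that is not a plain fast-forward refuses outright instead of quietly creating merge commits or duplicate history. A sketch in a throwaway repository:

```shell
set -e
cd "$(mktemp -d)" && git init -q .
# Make every `git pull` refuse unless it is a plain fast-forward:
git config pull.ff only
# One-off equivalent (remote and branch names hypothetical):
#   git pull --ff-only origin master
git config pull.ff
```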

If you send a pull request and the other one does a rebase, you have to prune out your copy, and rebase everything you did on top of the changes to the other changes. Honestly, I did not read every single word of your post. To be fair, the learning curve is much steeper than svn's. This is not because of the tool itself, but because it introduces lots of version-control ideas that seem odd coming from svn.

The workflow of versioning in git is quite different, and this workflow looks complicated at first, but once we get used to it, we find a lot of it rational. Now I have come to the crossroads in my life. I always knew what the right path was. Without exception, I knew, but I never took it. It was too damn hard. He has chosen a path. Let him continue on his journey. Personally, I use the guide at http: I disagree that git does not provide meaningful abstractions.

The abstractions git provides are very much like the abstractions provided by a file system — and once you understand them, they are simple to work with.

I dislike some of the defaults that git provides; for example, I always have people I am working with put a setting in their global config. But my distaste for those defaults is a reflection of the admin system provided by github, and git was designed before github.

Finally, comparing git to svn is sort of like comparing google maps with a tumblr. I can be totally in awe of the aesthetics of the tumblr and I can say all I want about how google maps does not have those design characteristics. And, in doing so, I would be pretty much missing the point of why people would like either one of them.

Thank you for this well-timed post. I am familiar with at least 9 source control systems, and I am paid-administrator-proficient with 4 of those, including ClearCase and Perforce.

Git is such a mighty pain in the rear that I just gave up ever working on any open source project that uses it. Its learning curve is a sucktastic cliff and you really do need to know almost everything about it before you can stop being dangerous. The tool should not be more complicated and inconsistent than the programming language source it protects.

But two branches in the same repository and you do not want to consider the target branch as the current branch? Why would you even want to do that?

If you know git internals pretty well, you could probably find a way to forge a merge commit in memory without checking them out. Linus never ever studied properly, even with great teachers like Tanenbaum around. But I fully agree that Mercurial achieves the same as git without introducing the same kind of complex interface. That does not make git a bad tool. It just makes it inferior, but heavily hyped.

Even the first hate-point is enough to stop reading further, for anyone who has really tried git. The other 9 are also subjective crap, decorated with lovely graphics. No more command-line syntax to remember, and it shows you a nice log of all the branches and past commits. Just read your post for the first time, complete with the August update. But even back then, decentralized version control was not some new field of research. Linus was indeed bold to write his own system from scratch, and its success speaks for itself, but part of the reason he was free to do that was that he was the only constituent he had to satisfy.

But even if we had seriously considered it, I think SVN would still be a centralized system today. The MediaWiki team has been going through a lot of pain, adopting Gerrit as a code review and Git management tool. I guess my feeling in all this is that Git or something equally powerful is necessary, but not sufficient, for effective code management in distributed teams.

Besides the basic version control functions, being intuitive and secure are the essential features of a version control tool. If the tool cannot even manage the source code safely, then why use it? This new-fangled world with distributed workflows, and the new ideas that come with it, are too much. In other words, more than half of the rant is a person used to a wheelchair complaining about the complexity of using wings.

The rant is only useful to Git maintainers really, just as a reminder that their user base is growing very diverse. I have a feeling you approached git expecting it to be like svn and got disappointed. They are designed for, and suited to, completely different workflows. Try managing the Linux kernel with svn… If you have a 10-developer project, maybe git is overkill, but if your project grows enough then git is the best choice around. Just a note about Github: many people use pull requests (PRs) as part of their daily workflow.

Feel free to open one and discuss your plans. This process gives everyone a chance to validate the design, helps prevent duplication of effort, and ensures that the idea fits inside the goals for the language and tools. It also checks that the design is sound before code is written; the code review tool is not the place for high-level discussions.

When planning work, please note that the Go project follows a six-month development cycle. The latter half of each cycle is a three-month feature freeze during which only bug fixes and documentation updates are accepted. New contributions can be sent during a feature freeze, but they will not be merged until the freeze is over. Significant changes to the language, libraries, or tools must go through the change proposal process before they can be accepted.

Sensitive security-related issues only! First-time contributors who are already familiar with the GitHub flow are encouraged to use the same process for Go contributions. Even though Go maintainers use Gerrit for code review, a bot called Gopherbot has been created to sync GitHub pull requests to Gerrit.

Open a pull request as you normally would. Gopherbot will create a corresponding Gerrit change and post a link to it on your GitHub pull request; updates to the pull request will also get reflected in the Gerrit change.

When somebody comments on the change, their comment will also be posted in your pull request, so you will get a notification. It is not possible to fully sync Gerrit and GitHub, at least at the moment, so we recommend learning Gerrit. It's different but powerful, and familiarity with it will help you understand the flow.

In addition to a recent Go installation, you need to have a local copy of the source checked out from the correct repository. Either clone from go. Each Go change must be made in a separate branch, created from the master branch. You can use the normal git commands to create a branch and add changes to the staging area. You can edit the commit description in your favorite editor as usual. The git codereview change command will automatically add a unique Change-Id line near the bottom.
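Put together, the branch-and-commit steps look like this as a terminal session (the branch and file names are hypothetical; git codereview is installed separately, from golang.org/x/review/git-codereview):

```
$ git checkout -b mybranch master    # one branch per change
$ (edit some files)
$ git add math/sin.go                # stage the edits
$ git codereview change              # commit; a Change-Id line is added automatically
```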

That line is used by Gerrit to match successive uploads of the same change. Do not edit or delete it.

A Change-Id looks like this: The tool also checks that you've run go fmt over the source code, and that the commit message follows the suggested format.

If you need to edit the files again, you can stage the new changes and re-run git codereview change. Make sure that you always keep a single commit in each branch. If you add more commits by mistake, you can use git rebase to squash them together into a single one.

You've written and tested your code, but before sending it out for review, run all the tests for the whole tree to make sure the changes don't break other packages or programs. To build under Windows use all. After running for a while and printing a lot of testing output, the command should finish by printing:
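On Unix-like systems the full-tree test run is driven by all.bash from the src directory; a successful run ends with an ALL TESTS PASSED line:

```
$ cd go/src
$ ./all.bash        # all.bat from a Windows command prompt
... lots of build and test output ...
ALL TESTS PASSED
```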

You can use make. See also the section on how to test your changes quickly. Once the change is ready and tested over the whole tree, send it for review. This is done with the mail sub-command which, despite its name, doesn't directly mail anything; it just sends the change to Gerrit.

Gerrit assigns your change a number and URL, which git codereview mail will print. If you get an error instead, check the Troubleshooting mail errors section. If your change relates to an open GitHub issue and you have followed the suggested commit message format, the issue will be updated in a few minutes by a bot, linking your Gerrit change to it in the comments.
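The mail output looks roughly like the following; the change number and subject line here are invented for illustration:

```
$ git codereview mail
remote: New Changes:
remote:   https://go-review.googlesource.com/c/go/+/99999 math: improve Sin precision
```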

Go maintainers will review your code on Gerrit, and you will get notifications via e-mail. You can see the review comments on Gerrit and reply to them there. You can also reply using e-mail if you prefer.

If you need to revise your change after the review, edit the files in the same branch you previously created, add them to the Git staging area, and then amend the commit with git codereview change.

If you don't need to change the commit description, just save and exit from the editor. Remember not to touch the special Change-Id line.

Again, make sure that you always keep a single commit in each branch. The first line of the change description is conventionally a short one-line summary of the change, prefixed by the primary affected package. The rest of the description elaborates and should provide context for the change and explain what it does. Write in complete sentences with correct punctuation, just like for your comments in Go. Add any relevant information, such as benchmark data if the change affects performance.

The benchcmp tool is conventionally used to format benchmark data for change descriptions. The special notation "Fixes" followed by an issue number associates the change with that issue in the Go issue tracker. When this change is eventually applied, the issue tracker will automatically mark the issue as fixed. If the change is a partial step towards the resolution of the issue, use the notation "Updates" instead. This will leave a comment in the issue linking back to the change in Gerrit, but it will not close the issue when the change is applied.
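A commit description following these conventions might look like this; the package name, prose, and issue number are all hypothetical:

```
math: improve Sin precision for very large arguments

The previous implementation lost accuracy for arguments above 1e10.
Switch to Payne-Hanek style argument reduction.

Fixes #12345
```

The first line gives the affected package and a short summary; the body explains the why; the Fixes footer links the change to the tracker issue and eventually closes it.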

If you are sending a change against a subrepository, you must use the fully-qualified syntax supported by GitHub to make sure the change is linked to the issue in the main repository, not the subrepository. All issues are tracked in the main repository's issue tracker.

This section explains the review process in detail and how to approach reviews after a change has been mailed. When a change is sent to Gerrit, it is usually triaged within a few days. A maintainer will have a look and provide some initial review that, for first-time contributors, usually focuses on basic cosmetics and common mistakes. After an initial reading of your change, maintainers will trigger trybots, a cluster of servers that will run the full test suite on several different architectures.

Most trybots complete in a few minutes, at which point a link will be posted in Gerrit where you can see the results. If the trybot run fails, follow the link and check the full logs of the platforms on which the tests failed. Try to understand what broke, update your patch to fix it, and upload again.

Maintainers will trigger a new trybot run to see if the problem was fixed. Sometimes, the tree can be broken on some platforms for a few hours; if the failure reported by the trybot doesn't seem related to your patch, go to the Build Dashboard and check if the same failure appears in other recent commits on the same platform.

In this case, feel free to write a comment in Gerrit to mention that the failure is unrelated to your change, to help maintainers understand the situation. The Go community values very thorough reviews. Think of each review comment like a ticket. After you update the change, go through the review comments and make sure to reply to every one.

You can click the "Done" button to reply indicating that you've implemented the reviewer's suggestion; otherwise, click on "Reply" and explain why you have not, or what you have done instead. It is perfectly normal for changes to go through several rounds of reviews, with one or more reviewers making new comments every time and then waiting for an updated change before reviewing again.

This cycle happens even for experienced contributors, so don't be discouraged by it. As they near a decision, reviewers will make a "vote" on your change.

Log in to the Git server as the root user.