[cross-post][long] svn workflow for fpga development

Hi everyone,

this is not really a question, but more some food for thought on how to optimize an fpga development flow by leveraging the benefits of a vcs.

I've been using svn for some time now and I cannot imagine dealing with complex systems in a collaborative environment without a vcs under my belt.

In the past I've always followed two simple rules: 1. never break the trunk; 2. commit often.

The reasons behind these two rules are quite simple, and I stick to them because from them derives a set of additional rules which allow you to respect them both.

The first rule is there because without it nobody new to the team can simply check out a copy and rely on it. This implies that commits to the trunk are performed only when a feature or a bugfix is complete and does not break compatibility with the rest of the system.

Allowing a newcomer to get into the flow easily saves hours of training, and even experienced designers need a stable version to rely on, to run their regression tests against and/or to add features to.

The second rule is needed because the very purpose of a vcs is to track changes as regularly as possible, so that you (or somebody else) can roll back and forth and use whatever is most appropriate at any given time.

Apparently the two rules are in contradiction, since the first advocates few commits (only the good ones) and the second lots of commits (even broken ones). To get the best of both worlds, 'branching' comes to the rescue. When we branch the trunk we set up our working environment in a clean way and we can mess with it as long as we want, until the feature/bugfix is complete. Only at that point do we merge it back into the trunk, so that the trunk gets all the goodies we worked on.

Working on a branch leaves (no pun intended) the trunk safe from exploration efforts and allows everyone to rely upon it. After all, a healthy trunk means a healthy tree.

The pitfall here is that too often people branch and then never sync with the evolving trunk, so when it is time to merge it is hell! Syncing regularly with the trunk keeps the branch on track, without the risk of a fight when it is time to close the branch and move on.
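The whole cycle above can be sketched as shell commands against a throwaway local repository. All names (feature-x, top.vhd) are invented for illustration; the script assumes Subversion >= 1.8 (for automatic reintegrate merges) on PATH and skips cleanly otherwise:

```shell
# Branch, commit freely, sync, reintegrate -- against a scratch repo.
if command -v svn >/dev/null 2>&1 && command -v svnadmin >/dev/null 2>&1; then
  DIR=$(mktemp -d)
  svnadmin create "$DIR/repo"
  URL="file://$DIR/repo"
  svn mkdir -q -m "layout" "$URL/trunk" "$URL/branches"

  svn checkout -q "$URL/trunk" "$DIR/wc"
  echo "entity top is end;" > "$DIR/wc/top.vhd"
  svn add -q "$DIR/wc/top.vhd"
  svn commit -q -m "initial design" "$DIR/wc"

  # 1. branch from the trunk (a cheap copy in svn)
  svn copy -q -m "branch for feature X" "$URL/trunk" "$URL/branches/feature-x"
  svn switch -q "$URL/branches/feature-x" "$DIR/wc"

  # 2. commit early and often on the branch, broken or not
  echo "-- work in progress" >> "$DIR/wc/top.vhd"
  svn commit -q -m "WIP: may be broken, that is fine on a branch" "$DIR/wc"

  # 3. sync regularly: pull trunk changes INTO the branch
  svn merge -q "$URL/trunk" "$DIR/wc"
  svn commit -q -m "sync branch with trunk" "$DIR/wc"

  # 4. when the feature works, merge the branch back into the trunk
  svn switch -q "$URL/trunk" "$DIR/wc"
  svn merge -q "$URL/branches/feature-x" "$DIR/wc"
  svn commit -q -m "merge feature X into trunk" "$DIR/wc"
  STATUS=done
else
  STATUS=skipped   # svn not on PATH; nothing to demonstrate
fi
echo "branch/sync/merge demo: $STATUS"
```

Step 3 is the one people skip; run it every few days and step 4 stays painless.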

I have deliberately left the concept of tagging out of the flow because it does not add much to this conversation and might be treated separately (maybe another thread ;-)).

If you find this flow lacking important features and/or completely missing essential points relevant to an fpga development flow, I'd appreciate it if you could share your thoughts.

Al

--
A: Because it messes up the order in which people normally read text. 
Q: Why is top-posting such a bad thing? 
A: Top-posting. 
Q: What is the most annoying thing on usenet and in e-mail?
Reply to
alb

I'll add my 2 cents. Rule 1 is not as important in my experience. Having a regular regression (nightly, weekly, whatever) that runs on a clean checkout of the trunk is more important. Then you can easily identify what broke the trunk, and the version control tools allow everyone to continue working (by unmerging the offending commit).

Rule 2 is spot on. Check in early, check in often.

Working on branches is great, but as you indicated, merging sucks. The tools (SVN at least) allow easy branching, but are shitty when it comes to merges.

If you've got a version control tool that allows for easy merging back to the trunk, then you've got the best of all.

Atria (ClearCase) really did this well, but it was hell to administer, and performance sucked.

I've no experience with the more modern distributed vcs (git, mercurial). Someone else hopefully will chime in here on how well they handle branches and merges...

--Mark

Reply to
Mark Curry

So, what do you put on the trunk and what's on the branches?

One strategy which works for some types of system and some ways of team working is:
- to have the trunk for development
- whenever you reach a milestone of some sort, create a branch containing the milestone
- whenever the regression tests have passed, check in to the trunk (hopefully frequently)
- if it is necessary to save when regression tests have not been passed, check in to a branch

Thus:
- the head of the trunk always contains the latest working system
- significant historical waypoints can be found on a branch

Reply to
Tom Gardner

Hi Mark,

In comp.arch.fpga Mark Curry wrote: []

and this is where I wanted to go in the first place, but there's no regression running regularly. Indeed the whole verification process is a bit of a mess, but that's another story! [1]

In the event you *do not* have a regular regression you may now understand why rule 1 is in place. Without it, any commit may break the trunk silently, until the next guy runs a sim and nothing works anymore.

Without a regression in place every commit can break the trunk, and it will take time before the issue surfaces. Being able to branch and commit without fear of breaking anything gives the developer much more freedom to trace her/his changes.

this one is vital. You want to keep track of what you are doing, but you need to preserve the team's sanity as well, so it would be too risky to allow everyone to commit early and often on the trunk.

This happens because people branch and do not sync with the trunk regularly. I'm quite interested in understanding the technical reasons behind the merging issue in svn; what is sure is that without this possibility there's no way out of using only the trunk.

The reason for this post is that I'm trying to gather arguments to convince people in the team to use svn and to profit from it. Proposing yet another tool to somebody who doesn't even see why we need a vcs is pointless and would only undermine the whole effort.

I'm tempted to move on to git, but I guess it would already be a big success if the team accepted a fair usage of svn. On my own, I'll probably experiment with git-svn, which seems to be a valuable tool to profit from git's performance in an svn environment.
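For anyone curious what that experiment looks like, here is a minimal git-svn round trip against a scratch repository: clone the svn repo as a git repo, commit locally at git speed, then replay the commits onto the svn trunk. All names are hypothetical, and the demo skips cleanly when git-svn or svnadmin is not installed:

```shell
# git as a fast local front-end to an svn server (scratch repo demo).
if command -v svnadmin >/dev/null 2>&1 && command -v svn >/dev/null 2>&1 \
   && git svn --version >/dev/null 2>&1; then
  DIR=$(mktemp -d)
  svnadmin create "$DIR/repo"
  svn mkdir -q -m "layout" "file://$DIR/repo/trunk" \
      "file://$DIR/repo/branches" "file://$DIR/repo/tags"

  # clone with -s: map the standard trunk/branches/tags layout to git refs
  git svn clone -q -s "file://$DIR/repo" "$DIR/git" >/dev/null 2>&1

  (cd "$DIR/git" \
     && git config user.email demo@example.invalid \
     && git config user.name demo \
     && echo "entity top is end;" > top.vhd \
     && git add top.vhd \
     && git commit -q -m "local commit, invisible to svn until dcommit" \
     && git svn dcommit >/dev/null 2>&1)   # one svn revision per git commit

  STATUS=$(svn log -q "file://$DIR/repo/trunk" | grep -c '^r')
else
  STATUS=skipped   # git-svn or subversion not available
fi
echo "svn revisions on trunk: $STATUS"
```

`git svn rebase` (not shown) is the sync step, the equivalent of `svn update` before a dcommit.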

Al

[1] I'll soon post something about regression testing...and yes, that's a menace!
Reply to
alb

In my experience merging in SVN works fine as long as every file is touched by only a single person. It only breaks in this situation:

- Alice and Bob check out the latest trunk or create local branches for themselves

- they start making their changes

- Bob is done and commits a change to file XYZ or merges his changed branch to the trunk.

- Alice also works on her stuff, including file XYZ, but on the version she originally checked out, not the one Bob has changed in the meantime.

- Now Alice wants to merge her changes. Amongst others, her merge includes a changed version of file XYZ, but her version does not contain the changes Bob made earlier. The SVN client sees that the revision of XYZ her change is based on is older than the one in the trunk. Committing her file might mean reversing Bob's changes to XYZ, which could be intentional or a mistake. There's no way for the client to know for sure, so it quits, complains, and forces the user to decide manually which changes to apply. If this happens for even a single file, the entire merge fails, because half a merge would probably break everything.
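The collision above is easy to reproduce in a scratch repository. File contents and user names are invented; the demo skips cleanly when svn is not installed:

```shell
# Two working copies, one stale, editing the same line -> conflict.
if command -v svn >/dev/null 2>&1 && command -v svnadmin >/dev/null 2>&1; then
  DIR=$(mktemp -d)
  svnadmin create "$DIR/repo"
  URL="file://$DIR/repo"

  svn checkout -q "$URL" "$DIR/bob"
  echo "port a : in  std_logic;" > "$DIR/bob/xyz.vhd"
  svn add -q "$DIR/bob/xyz.vhd"
  svn commit -q -m "add xyz" "$DIR/bob"

  svn checkout -q "$URL" "$DIR/alice"    # Alice starts from the same revision

  echo "port a : in  std_logic_vector;" > "$DIR/bob/xyz.vhd"
  svn commit -q -m "Bob widens the port" "$DIR/bob"

  # Alice edits the SAME line of her now-stale copy...
  echo "port a : out std_logic;" > "$DIR/alice/xyz.vhd"
  # ...and must update before committing; svn cannot pick a side for her:
  svn update -q --accept postpone "$DIR/alice"
  CONFLICTS=$(svn status "$DIR/alice" | grep -c '^C')
else
  CONFLICTS=skipped   # svn not on PATH
fi
echo "conflicted files: $CONFLICTS"
```

Had Bob and Alice touched different files (or even different lines), the update would have merged silently; the overlap is what forces the manual decision.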

As long as every file is edited exclusively by one person, there should be no problem (at least I haven't had any). In past projects this was mostly an issue with top-level files: everyone is responsible for a module they work on exclusively, but if they add or change ports of that module, that has to be accounted for up through the hierarchy. So instead of changing the top-level file themselves, they should inform the person responsible for that file that the ports need to be changed. This is additional overhead, but better than breaking merges completely; it never was a big problem for us, but I can see it being problematic in bigger projects with more people.

I guess (I don't know, since I haven't looked into it) git and mercurial have some sort of mechanism to lock files or portions of files, or to track every detail you change, so during a merge the client can do step-by-step modifications of shared files, which maybe resolves more uncertainties.

Have fun, Sean

Reply to
Sean Durkin

And this is the strength of distributed version control: rules 1 and 2 are no longer mutually incompatible.

formatting link
for a rather (OK extremely!) biased account of the difference; any good links giving the reverse side of the story?

(Mercurial, but Git is similar)

You clone the trunk into a local repo, where you check in broken code as often as you want. At last, when you are done and the new feature is working, you have a consistent set of changes you can push back to the trunk without breaking it.

If someone else has pushed to the trunk, you end up with two heads in the trunk, which require merging. Which isn't too bad in Mercurial but occasionally involves manual intervention.

The simplest way to deal with this, I find, is to *pull* the new head from the trunk, merge locally, re-test your change, then push.

(Yes you can branch as well, and I do that for stable versions, i.e. releases, but cloning the repo is the most convenient way to branch).

- Brian

Reply to
Brian Drummond

Hi Tom, (I'm answering, reposting to vhdl as well to keep the thread cross-posted)

Tom Gardner wrote: []

The trunk gets only merges from the branches [1]. When you start a project from scratch you certainly need to start from the trunk, but that situation lasts a very short time (a few days, if not less) and as soon as possible you branch from it.

Branches exist for only two reasons [2]:

  1. adding features
  2. removing bugs/problems

both of them start from the trunk and both of them should merge back into the trunk.

Is the trunk always functioning? If so, it cannot really be for development, since every commit should be done only when the whole changeset is working (tested).

this is what I call 'tagging'. Yes, it is a branch, but its purpose is to freeze the status of your development at some known state. These tags are essential when multiple repositories point at each other for the purpose of reuse: you can point to version 1.2.3 and stick to it, or follow its evolution to whatever later version.
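In svn a tag is mechanically nothing special: it's another cheap copy, by convention placed under tags/ and then never committed to. A tiny sketch against a scratch repo (the version number is invented; skips cleanly when svn is absent):

```shell
# Freezing the current trunk state under a version name.
if command -v svn >/dev/null 2>&1 && command -v svnadmin >/dev/null 2>&1; then
  DIR=$(mktemp -d)
  svnadmin create "$DIR/repo"
  URL="file://$DIR/repo"
  svn mkdir -q -m "layout" "$URL/trunk" "$URL/tags"

  # the "tag": a constant-time server-side copy of the trunk
  svn copy -q -m "tag version 1.2.3" "$URL/trunk" "$URL/tags/1.2.3"
  TAGS=$(svn ls "$URL/tags" | wc -l | tr -d ' ')
else
  TAGS=skipped   # svn not on PATH
fi
echo "tags: $TAGS"
```

A reusing project can then check out (or set an svn:externals entry against) tags/1.2.3 and be immune to later trunk churn.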

so the regression tests are run from your local copy? Or from the branches? I think I missed this part.

uhm... at this point you branched to fix something while the trunk is being developed. When do you decide that it is time to merge back into the trunk?

This contradicts your earlier statement, or I might have misunderstood what you mean by keeping the trunk for development. If no pair of consecutive commits breaks the trunk then we are on the same page, but while I'm suggesting to branch and therefore commit even if you break something, you are suggesting that no one should commit to the trunk unless her/his piece is working (system-wise).

Yes, including the ones that broke your regression tests...

[1] The only exception is for modifications which are straightforward and do not require more than a few minutes of work. [2] Refactoring is more delicate, since it requires a solid regression suite in place to be sure that functionality is not affected.
Reply to
alb

Hi Chris, Chris Higgs wrote: []

I think this graph does answer your question:

formatting link

'Living is easy with eyes closed' - John Lennon (Amen!)

Behind conservative enterprises there are conservative managers, by choice and/or by nature, but this is not the whole story! Some fields are more conservative than others (defense, aerospace, ...). A distributed version control system, while more attractive for its rich set of features, may scare away whoever feels a lack of control behind it.

'The Cathedral and the Bazaar' is a nice essay by E. Raymond, but its advocacy for the bazaar style that brought the *nix community where it is nowadays does not necessarily apply in a much more controlled regime, like you may have on a project for the Department of Defense.

I fully agree with you here; that would be the next item on my personal agenda... but revolutionary change requires time and patience ;-).

The main problem I see currently is the lack of a 'command line' mindset among the designers. They are all too used to graphical interfaces and manual wave tracing (sigh!). I suspect it is a habit related to the complexity they used to handle, which does not fit well with today's systems.

Together with building a regression environment, we should train people on a different verification model and workflow. Anyhow, Continuous Integration is built around version control, so I need to get this fixed before moving on.

I agree here as well, but the software development world has an infrastructure around it that hardware designers unfortunately do not have. On my *nix box I can install nearly any type of software in a matter of seconds, look at the sources, discuss with the authors and benefit from a storm of developers who are constantly improving the quality of these products. They sell support and make a living out of it.

In the hardware world instead we close everything, live behind patrolled fences and sustain licensing policies which are close to insanity, essentially limiting progress and fair competition.

Al

Reply to
alb

Thanks; I don't subscribe to that and my reader won't let me cross-post to a group to which I'm not subscribed.

The key is continuous incremental integration checked by solid comprehensive automated tests.

My problem with your statement is that I don't agree with [1]. Merges of any kind require regression testing before re-merging. The issue then becomes whether your merge conflicts with another merge.

That is trapped by having a *continual* background automated build and regression test on the head of the trunk. When a merge causes a failure, it is picked up quickly. Traditionally there is a screen visible to everybody that shows the build status as red or green. Collective groans are emitted when it becomes red, and sponge balls may be thrown at the perpetrator - who keeps the balls ready to throw at the next malefactor :)

Since failure is picked up quickly, it is usually easy to determine what caused the failure, and to correct it.

That's a damn sight easier to deal with than the major inconsistencies that creep in if merging/re-integration occurs only occasionally.

The major danger is that seeing a green status light can lull the unwary into thinking that there are no problems. Naturally the "quality" of the greenness is completely dependent on the quality of the tests.

In my experience [2] (i.e. refactoring) is the tail that wags this dog. But all development can be regarded as refactoring, e.g. refactoring an unimplemented feature into a functioning feature.

Agreed, but I don't see the benefit of a branch in that process. Trunk->workspace->trunk.

Yes, and yes.

The head of the trunk is frequently re-built and re-tested in its entirety - continuous incremental integration and regression testing.

Agreed.

On your local copy whenever convenient for you, and continually on the head of the trunk.

Whenever convenient for you. Preferably merge to the trunk several times a day.

Since the deltas are small there are unlikely to be major problems; when they occur, there are not many places where the culprit could be.

Correct. Why save and publish something that is broken?

Those should only be in your private workspace.

Ah, the infamous "this change is so small it can't possibly break anything"!

All development is refactoring.

Reply to
Tom Gardner

In my own experience, I've found it's far easier to lead by example than battle the internal corporate structure - I soon got tired of arguing!

If the company is wedded to out-dated version control software I'll still use git locally. There are often wrappers[1] that make interfacing easy. I'll run GitLab to provide myself a nice HTTP code/diff browser etc. If there's no bug-tracker(!!) I'll use GitLab issues to track things locally. If the company has no regression, I'll run a Jenkins server on my box. If tests aren't scripted, I'll spend some time writing some Makefiles. If the tests aren't self-checking, I'll gradually add some pass/fail criteria so the tests become useful. I'll then start plotting graphs for things like simulation coverage, FPGA resource utilisation etc. using Jenkins.

Unless you're working in an extremely restrictive environment with no control over your development box, none of this requires sign-off from the powers that be. You'll find other developers and then management are suddenly curious to know how you can spot, only a few minutes after they've checked something in, that the resource utilisation for their block has doubled... or how you can say with such confidence that a certain feature has never been tested in simulation. Once they see the nice web interface of Jenkins and the pretty graphs, and understand the ease with which you can see what's happening in the repository, they'll soon be asking you to centralise your development set-up so they can all benefit :)

Chris

[1]
formatting link

PS apologies for breaking the cross-post again... curse GG

Reply to
Chris Higgs

Hi Chris,

Chris Higgs wrote: []

Leading by example is certainly far more powerful than arguing, I agree. The problem is that in an environment where your hours are counted for each activity you carry out, it becomes obvious that I should take all these activities home and do them in my 'spare' time. The good thing is that I find these activities quite amusing and enjoy building this kind of environment a lot.

I'll certainly give git-svn a shot; as far as code/diff browsing goes, I'm far too used to emacs and I consider html browsing quite cumbersome (no keyboard bindings! How can you live without key bindings!).

We do have bugzilla, but people are not using it effectively, so there isn't really a list of bugs, rather a list of 'actions' which are assigned to a specific person. This way you have no chance to check what bugs other people have, and you cannot even contribute to them (I know, it sounds pretty silly!).

This is my secret plan indeed, but you need to be careful: you do not want to step on somebody else's toes! Moreover, I'm not specifically asked to do this, so I need to sneak these activities in among my 'official' ones.

At least I managed to install a vbox on my windoz station and now I'm practically behind my fence ;-)

news.individual.net charges you 10$ a year... A reasonable price to get rid of GG!

Reply to
alb
