What Is SVN?

jpieper September 8, 2017
Version Control

For years, Apache Subversion (SVN) was one of the most popular version control systems. Then came 2010, and Git began to gain ground. Despite Git's rise, Subversion is still widely used and has a solid user base.

This blog post acts as a foundation on top of which you can build deeper knowledge. If you want to learn more about how to use and host SVN, check out our Subversion tutorial post.

History

In the late 1990s, CVS (Concurrent Versions System) was very widely used in software development, for both open source and commercial projects. However, CVS had started to receive a lot of criticism and was no longer considered up to the standards of the era. For example, CVS had poor support for third-party tools, and it had no support for the HTTP, HTTPS, or SSH protocols. A better system was needed.

In 2000, CollabNet started to develop a version control system to replace CVS. The result was Subversion. It covered most of the features in CVS but also introduced capabilities CVS was missing, such as atomic commits and the ability to rename and move versioned files.

Even though the SVN project began in 2000, version 1.0 wasn't published until February 2004. SVN became an Apache project in November 2009, when it was accepted into the Apache Incubator.

After SVN was introduced to the world, CVS withered away quite fast, and no new CVS version has been released since 2008. SVN, in contrast, is still in active development, and a new version is expected in 2017.

SVN Is a Centralized Version Control System

Software developers use version control for storing and tracking changes in different types of files, such as source code and documentation. This enables multiple developers to work effectively on the same codebase without messing up each other's work. If somebody makes a mistake or something unexpected happens, version control ensures that the latest working version of the code can be restored.

Version control systems can be roughly divided into two categories: distributed version control systems (DVCS) and centralized version control systems (CVCS). SVN falls into the latter category.

Note: Unlike SVN, Helix Core supports both centralized version control and distributed version control (DVCS). Learn more about Helix Core's DVCS capabilities.

A centralized version control system means that the version history is stored on a central server. When a developer wants to make changes to certain files, they check out those files from the central repository to their own computer. After making the changes, they commit them back to the central repository. The complete revision history in the repository is not copied to the developer's computer, as it would be in a distributed version control system.
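
In practice, the day-to-day cycle looks something like this (a minimal sketch; the server URL and file names are hypothetical):

$ svn checkout https://svn.example.com/repos/project/trunk project
$ cd project
$ svn status                            # list local modifications after editing
$ svn commit -m "Describe the change"   # publish the changes to the central server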

Challenges with SVN

The most common complaint developers have about SVN is its tedious branching model. Branches allow you to work on multiple versions of your code simultaneously. In SVN, branches are created as directories inside the repository, and this directory structure is the main reason developers are less than fond of Subversion's branching model.
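
Creating a branch in SVN, for example, is literally a server-side directory copy; a quick sketch with a hypothetical repository URL:

$ svn copy https://svn.example.com/repos/project/trunk \
           https://svn.example.com/repos/project/branches/my-feature \
           -m "Create a branch for the my-feature work"

The copy itself is cheap on the server, but the branch is still just another directory in the repository tree.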

Subversion 1.6 introduced a concept called tree conflicts. Tree conflicts are conflicts caused by changes in the directory structure, such as renaming or deleting files. Since changes in the directory structure are quite common, tree conflicts occur relatively often. This adds complexity to using branches in SVN, because SVN doesn't allow you to commit your changes while there is a tree conflict. In comparison, Git's branches are just references to specific commits in history, which is a much more lightweight approach.

Note: Helix Core offers a similarly easy approach to branching called Streams, aka branches with brains.

SVN's centralized nature also has some shortcomings. Since the version history is stored on a central server, you pretty much have a classic case of all eggs in one basket. If the central server goes down or is under maintenance, no commits or checkouts can be done. Even worse, if the server breaks down beyond repair, you'll lose the history altogether unless you have proper backups in place. Therefore, remember to take regular backups of your repositories, store them on a separate server, and make sure they work!
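
With the administration tools that ship with SVN, a basic backup can be as simple as the following (the paths are illustrative):

$ svnadmin dump /var/svn/project > /backups/project.svndump     # portable full dump
$ svnadmin hotcopy /var/svn/project /backups/project-hotcopy    # byte-for-byte copy of a live repository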

In addition, a centralized version control system requires a connection to the central repository in order to commit. At this point, it is good to repeat the ancient wisdom of version control: "Commit early, commit often." With this wisdom in mind, using SVN without a connection to the central repository is rather pointless. For example, if you often code during flights, remember that SVN will not let you make smaller saves (commits) until you have restored the connection. In distributed version control systems, no connection is needed for the majority of actions, which also makes most actions a bit faster.

Benefits of Using SVN

This may seem a bit contradictory, but SVN's centralized nature is not only a threat to data security. Security is actually one of the greatest benefits of SVN.

Because the version history is stored on a central server, you can argue that it is more secure than in distributed systems: the history cannot be changed without access to the server. Thus, a developer working on their own local working copy cannot damage the version history by accident.

Another benefit of Subversion is that it allows you to manage read and write permissions at the repository and even the file level out of the box. This gives you an extra layer of security, since you control who can do what in your SVN repositories. With proper tools, you can do this for Git and Mercurial as well; for example, with Helix TeamHub you can manage access rights per project, and even at the repository and branch level.
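
As an illustration of those out-of-the-box controls, SVN's path-based authorization is driven by a simple authz file; a minimal sketch (the group, user, and path names are made up):

[groups]
developers = alice, bob

[project:/]
* = r
@developers = rw

[project:/secret-plans]
* =
alice = rw

Here everyone can read the repository, members of the developers group can write to it, and the secret-plans directory is accessible only to alice.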

SVN also works well with large files. The usual problem with adding large files to version control systems is that their version history tends to consume a lot of disk space. In distributed version control systems, like Git and Mercurial, the whole version history is downloaded to the developer's computer when they clone the repository. As a result, the complete version history of the large files comes along too, making cloning slower and slower over time as versions accumulate. Of course, you can use workarounds like Git LFS or similar in distributed systems.

SVN's workflow is better suited to versioning large files by default. Since a developer checks out only the latest version of the files, the complete version history is not copied to the developer's computer. The downloaded files are therefore much smaller than they would be if they included the complete version history of the large files.

Are you interested in a more detailed comparison of Git and Subversion? Read more on Git vs SVN.

Usage

As I mentioned earlier, the open source community and modern development teams have favored Git over SVN during the last three to five years. However, Subversion is still widely used, especially in the corporate world. The obvious reason is that SVN has been around since 2004, and many organizations that adopted it back then are still using it.

Due to SVN's suitability for storing large files, many SVN users work with graphical assets, videos, and the like; good examples are the game development industry and web design. SVN is also fairly simple to learn, so non-developers can grasp it quite easily, making it a good fit for teams with varying levels of technical know-how.

Conclusion

My goal with this blog post was to give you enough context to understand what Subversion is and what it is used for. If you are interested in learning more, I suggest you visit this guide on Subversion. And when you are ready to set up your first repositories, Helix TeamHub offers free hosting for Subversion in the cloud.


List of Equivalent Commands in Git, Mercurial, and Subversion

jpieper September 8, 2017
Version Control

Whether you work in collaboration with other developers or alone, you need version control to track the changes you or others have made to your code.

The most commonly used open source version control systems are Git, Mercurial, and Subversion. There are a lot of differences between the three. The main one is that Subversion is a centralized version control system: it uses a central server to store all files, and developers check out a working copy, which they update from and commit to. Git and Mercurial, on the other hand, are distributed version control systems. As a result, the latter two have more commands in common with each other than with Subversion.
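
The practical difference shows up in the everyday commit flow; roughly (assuming Git's customary origin remote and master branch):

# Subversion: a commit goes straight to the central server
$ svn commit -m "Fix typo"

# Git (and, analogously, Mercurial): commit locally, publish separately
$ git commit -am "Fix typo"
$ git push origin master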

Read how to host SVN, Git, or Mercurial repositories behind your firewall.

However, similarities still exist, despite the different workflows. Although the commands used are rarely exactly the same, there are equivalent commands between these three version control systems. So, for your pleasure, we are providing you with a list — a cheat sheet if you will — of some of the most used commands in Git, Mercurial, and Subversion.

You will find this particularly useful if you are migrating from one version control system to another and need to learn the ropes in your new system. 

The list below is by no means exhaustive. We'd love to hear what types of similarities you have spotted in these version control systems. Or even if you disagree with the commands.

In the table below, angle brackets mark placeholders for revisions, files, and branches.

Git | Mercurial | Subversion
git add | hg add (only if the file is not tracked yet) | svn add (only if the file is not tracked yet)
git blame | hg blame | svn blame
git show <rev>:<file> | hg cat -r <rev> <file> | svn cat -r <rev> <file>
git clone | hg clone | svn checkout
git commit -a | hg commit | svn commit
git rm | hg rm | svn delete
git diff | hg diff | svn diff
git show HEAD: | hg cat -r <rev> | svn list
git merge | hg merge | svn merge
git checkout <file> | hg revert <file> | svn revert <file>
git checkout HEAD | hg update tip | svn switch or svn revert
git checkout <branch> | hg update <branch> | svn switch
git status | hg status | svn status
git pull | hg pull -u | svn update
git init | hg init . | svnadmin create
git fetch | hg pull | svn update
git reset --hard | hg revert -a --no-backup | svn checkout -r <rev> url://path/to/repo
git stash | hg shelve | no equivalent (may arrive in SVN 1.10, possibly in 2017)
git revert <rev> | hg backout <rev> | svn merge -r UPREV:LOWREV . (undo a range) or svn merge -c -REV . (undo a single revision)

Which version control system suits your needs? Read Git vs. SVN or Git vs. Mercurial to get an unbiased overview of these three systems before you make your choice. And of course, you can take them all for a test drive if you sign up for Helix TeamHub because we support all three.


Storing Large Binary Files in Git Repositories

jpieper September 8, 2017
Repository Management
Version Control


Storing large binary files in Git repositories seems to be a bottleneck for many Git users. Because of the decentralized nature of Git, in which every developer has the full change history on their computer, changes in large binary files cause Git repositories to grow by the size of the file in question every time the file is changed and the change is committed. The growth directly affects the amount of data end users need to retrieve when they clone the repository. For instance, storing a snapshot of a virtual machine image, changing its state, and committing the new state to a Git repository would grow the repository by approximately the size of each snapshot. If this is a day-to-day operation in your team, you may already be feeling the pain of overly swollen Git repositories.

Luckily, there are multiple third-party implementations that try to solve the problem, many of them using a similar paradigm as a solution. In this blog post, I will go through seven alternative approaches to handling large binary files in Git repositories, with their respective pros and cons. I will conclude the post with some personal thoughts on choosing an appropriate solution.

git-annex

Git-annex works by storing the contents of tracked files in a separate location; what is stored in the repository is a symlink to the key under that separate location. To share the large binary files within a team, the tracked files need to be stored in a separate backend. At the time of writing (23rd of July 2015), the available backends were S3 (Amazon S3 and other compatible services), Amazon Glacier, bup, ddar, gcrypt, directory, rsync, WebDAV, Tahoe, web, BitTorrent, and XMPP. Storing contents in a remote of your own devising via hooks is also supported.

Git-annex uses separate commands for checking out and committing files, which makes its learning curve a bit steeper than that of the alternatives that rely on filters. Git-annex is written in Haskell, and the majority of it is licensed under the GPL, version 3 or higher. Because git-annex uses symlinks, Windows users are forced to use a special direct mode that makes usage less intuitive.
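
To get a feel for those separate commands, a typical git-annex session looks roughly like this (the file and remote names are made up):

$ git annex init
$ git annex add big-video.mp4                      # moves content aside, stages a symlink
$ git commit -m "Add big video"
$ git annex copy big-video.mp4 --to backupremote   # send the content to a configured remote
$ git annex get big-video.mp4                      # fetch the content in another clone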

The latest version of git-annex at the time of writing is 5.20150710, released on the 10th of July 2015, and the earliest article I found on their website was dated 2010. Both facts suggest that the project is quite mature.

Pros:

  • Supports multiple remotes for storing the binaries.
  • Can be used without support from the hosting provider.

Cons:

  • Windows support is in beta.
  • Users need to learn separate commands for day-to-day work.

Project home page: https://git-annex.branchable.com/

Git Large File Storage (Git LFS)

The core Git LFS idea is that instead of writing large blobs to a Git repository, only a pointer file is written. The blobs themselves are written to a separate server using the Git LFS HTTP API. The API endpoint can be configured per remote, which allows multiple Git LFS servers to be used. Git LFS requires a specific server implementation to communicate with; an open source reference server as well as at least one other server implementation are available. The Git LFS server can offload storage to cloud services such as S3, or to pretty much anything else if you implement the server yourself.

Git LFS uses a filter-based approach, meaning that you only need to specify the tracked files with one command, and it handles the rest invisibly. The upside of this approach is its ease of use; however, there is currently a performance penalty because of how Git works internally. Git LFS is licensed under the MIT license, is written in Go, and binaries are available for Mac, FreeBSD, Linux, and Windows. The version of Git LFS is 0.5.2 at the time of writing, which suggests it is still at an early stage, although the project already had 36 contributors. As the version number is still below 1, changes to the API, for example, can be expected.
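
In day-to-day use, the filter approach boils down to a couple of commands; a sketch based on the Git LFS client (the tracked pattern is just an example):

$ git lfs install          # set up the Git filters once per machine
$ git lfs track "*.psd"    # adds a filter rule to .gitattributes
$ git add .gitattributes design.psd
$ git commit -m "Track Photoshop files with Git LFS"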

Pros:

  • GitHub behind it.
  • Ready binaries available for multiple operating systems.
  • Easy to use.
  • Transparent usage.

Cons:

  • Requires a custom server implementation to work.
  • API not stable yet.
  • Performance penalty.

Project home page: https://git-lfs.github.com/

git-bigfiles - Git for big files

The goals of git-bigfiles are pretty noble: making life bearable for people using Git on projects hosting very large files, and merging as many changes as possible back into upstream Git once they're of acceptable quality. Git-bigfiles is a fork of Git; however, the project seems to have been dead for some time. Git-bigfiles is developed using the same technology stack as Git and is licensed under the GNU General Public License version 2 (some parts of it are under different licenses, compatible with the GPLv2).

Pros:

  • If the changes were backported, they would be supported by native Git operations.

Cons:

  • Project is dead.
  • A fork of Git, which might make it incompatible.
  • Only lets you define which files count as large via a file-size threshold.

Project home page: http://caca.zoy.org/wiki/git-bigfiles

git-fat

Git-fat works in a similar manner to Git LFS. Large files can be tracked using filters in the .gitattributes file, and the large files are stored on any remote that can be reached through rsync. Git-fat is licensed under the BSD 2-Clause license. It is developed in Python, which creates more dependencies for Windows users to install; however, the installation itself is straightforward with pip. At the time of writing, git-fat has 13 contributors, and the latest commit was made on the 25th of March 2015.
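
Setup is likewise filter-based; roughly, following the project's documentation (the pattern and rsync target are made up):

$ echo '*.psd filter=fat -crlf' >> .gitattributes
$ printf '[rsync]\nremote = storage.example.com:/srv/git-fat\n' > .gitfat
$ git fat init     # installs the clean/smudge filters
$ git fat push     # uploads large file contents over rsync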

Pros:

  • Transparent usage

Cons:

  • Supports only rsync as a backend.

Project home page: https://github.com/jedbrown/git-fat

git-media

Licensed under the MIT license and supporting a similar workflow to the alternatives mentioned above (Git LFS and git-fat), git-media is probably the oldest of the available solutions. Git-media uses the same filter approach and supports Amazon S3, a local filesystem path, SCP, Atmos, and WebDAV as backends for storing large files. Git-media is written in Ruby, which makes installation on Windows not so straightforward. The project has 9 contributors on GitHub, but the latest activity was nearly a year ago at the time of writing.

Pros:

  • Supports multiple backends
  • Transparent usage

Cons:

  • No longer developed.
  • Ambiguous commands (e.g. git update-index --really refresh).
  • Not fully Windows compatible.

Project home page: https://github.com/alebedev/git-media

git-bigstore

Git-bigstore was initially implemented as an alternative to git-media. It works similarly to the others above, storing a filter property in .gitattributes for certain types of files. It supports Amazon S3, Google Cloud Storage, and Rackspace Cloud as backends for storing binary files. Git-bigstore claims to improve stability when multiple people collaborate. It is licensed under the Apache 2.0 license. As git-bigstore does not use symlinks, it should be more compatible with Windows. Git-bigstore is written in Python and requires Python 2.7+, which means Windows users might need an extra step during installation. The latest commit to the project's GitHub repository at the time of writing was made on April 20th, 2015, and the project has one contributor.

Pros:

  • Requires only Python 2.7+
  • Transparent

Cons:

  • Only cloud-based storage backends are supported at the moment.

Project home page: https://github.com/lionheart/git-bigstore

git-sym

Git-sym is the newest player in the field, offering an alternative to how large files are stored and linked in git-lfs, git-annex, git-fat, and git-media. Instead of calculating checksums of the tracked large files, git-sym relies on URIs. As opposed to its rivals, which also store the checksum, git-sym only stores the symlinks in the Git repository. The benefits of git-sym are thus performance as well as the ability to symlink whole directories. Because of this design, its main drawback is that it does not guarantee data integrity. Git-sym is operated through separate commands. It also requires Ruby, which makes it more tedious to install on Windows. The project has one contributor according to its home page.

Pros:

  • Better performance than the filter-based solutions.
  • Support for multiple backends.

Cons:

  • Does not guarantee data integrity.
  • Complex commands.

Project home page: https://github.com/cdunn2001/git-sym

Conclusion

There are multiple ways to handle large files in Git repositories, and many of them use nearly identical workflows. Some of the solutions listed are no longer actively developed, so if I were to choose a solution today, I would go with git-annex, as it has the largest community and supports various backends. If Windows support or transparency is a must-have requirement, and you are okay with the performance penalty, I would go with Git LFS, as it will likely enjoy long-term support because of its connection to GitHub.

How have you solved the problem of storing large files in Git repositories? Which of the aforementioned solutions have you used in production?


How to Choose the Right Git-Powered Wiki for Your Team

jpieper September 12, 2017
Repository Management
Version Control

Git-Powered Wiki Comparison: Helix TeamHub, GitLab, GitHub, and BitBucket

A while back, we compared the issue tracking features of four major SCM tools. Since people were quite interested in that comparison, we decided to continue the series with a similar post about Git-powered wiki features. After all, effective and reliable documentation is a must-have for any software development team.

If you do not want to read the detailed breakdown of the wiki features and their characteristics, you can jump straight to the summary table.

What Is a Git-Powered Wiki?

Let's get the definitions straight before we dive into the comparison. A Git-powered wiki is a wiki that stores its contents and change history in a Git repository. "Errare humanum est": every now and then, somebody mistakenly adds or removes things in the wiki, so the change history is a good thing to have, because it allows you to restore earlier versions of the documents. Additionally, storing wiki contents in a Git repository allows you to clone the repository locally, edit the content with your preferred text editor, and integrate tools that auto-generate documentation from the code.
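
For example, a documentation session against a hosted wiki might look like this (the clone URL is hypothetical; each host names its wiki repositories differently):

$ git clone git@scm.example.com:team/project-wiki.git
$ cd project-wiki
$ vim Home.md                                # edit with your preferred editor
$ git commit -am "Clarify the setup steps"
$ git push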

Now that we are on the same page, let's move on to the actual comparison!

Helix TeamHub


First, we need to clarify that unlike GitLab and GitHub, which limit each project to a single repository, Helix TeamHub allows users to create multiple repositories in one project. BitBucket allows multiple repositories per project as well, but its wikis are repository-bound, whereas in Helix TeamHub the wiki is project-bound. This affects where project documentation lives; e.g., in Helix TeamHub, you'll find the wiki's Git repository cloning link in the repository view, not the wiki view.

Documentation in Helix TeamHub differentiates itself from the other SCM tools with a few distinctive features. The first thing you'll notice is the side-by-side view in Helix TeamHub's wiki editor. This handy feature makes the editor easy to use, especially if you are not a seasoned Markdown expert, and it will save inexperienced Markdown users a lot of frustration.

The side-by-side view is also useful because Helix TeamHub lacks the formatting toolbar buttons that other tools have. However, you can easily consult the Markdown syntax via the link below the editor if you don't remember it by heart.

As I mentioned earlier, you can access the wiki's Git repository through the repository view. Here you'll also be able to clone the wiki's Git repository.


In the repository view, you can also access the version history at the code level.


Since wiki content is stored in a Git repository, attaching large binary files may not be a good idea in the long run. Helix TeamHub solves this by supporting WebDAV repositories. You cannot manage the WebDAV repository from Helix TeamHub's wiki, but you can link files from WebDAV to a wiki page. Note: WebDAV doesn't version the documents stored in it, so you will probably want to handle versioning through naming conventions.

Summary of Helix TeamHub

Helix TeamHub's wiki is easy to use and gets the job done. The side-by-side wiki editor view is an excellent way to facilitate project documentation and reference the Markdown syntax. The search function for wiki pages is also a valuable feature.

GitLab


GitLab's wiki editor shares certain similarities with GitHub's and BitBucket's. It is intuitive to use and includes toolbar buttons for formatting. Additionally, you'll find the cloning link in the wiki's top navigation bar, under "Git Access".

GitLab Flavored Markdown (GFM) allows users to add some extra formatting, such as emoji.


In addition to Markdown, GitLab also supports the RDoc and AsciiDoc documentation generators.

Summary of GitLab

The visual layout and the extended Markdown syntax make GitLab's wiki editor nice to use. Support for the RDoc and AsciiDoc documentation generators is a plus. GitLab's top bar navigation has its supporters, but many prefer left-hand navigation; those who do may need some time to get used to navigating to the right sub-pages to find things like the cloning link.

GitHub


GitHub's navigation is slightly more intuitive than GitLab's. Maybe it is the sidebar listing all of the wiki pages you have created for your current project that makes the difference. There is also a customizable sidebar and footer, which remain the same on every wiki page. This allows you to add, e.g., guidelines for your wiki, or to build your own navigation paths.

In GitHub, the cloning link is visible on every wiki page. In my opinion, this is a good thing, because you'll always find the link in the blink of an eye. In other tools, you need to click a link or find the right repository in order to locate the cloning link.


A noteworthy detail in GitHub is its support for multiple syntaxes and document generators. In addition to Markdown, RDoc, and AsciiDoc, which are also supported in GitLab, GitHub supports Creole, MediaWiki, Org-mode, Plain Old Documentation (Pod) for you Perl programmers, Textile, and reStructuredText.

I did encounter quite a big disadvantage with the GitHub wiki: you can only add images by adding an image link. Unlike in other wiki editors, you cannot just upload a picture to the editor; in GitHub, you need to add an image URL. For other types of attachments, there isn't even a dedicated link or button. Basically, this leaves you with two options: either you upload your files to another website and link them to your GitHub project, or you clone the repo, add the attachments, and commit them. Both of these feel a tad too complex when the other tools let you just drag and drop files.

Summary of GitHub

It is easy to navigate GitHub's wiki tool, and the editor is simple to use. You can choose from multiple markup languages, but at the end of the day, you usually need just one. The complexity of adding attachments is a big minus.

BitBucket


I have to admit that at first, I was a bit lost in BitBucket. Since I'm not that used to it, it took me a while to understand that you need to enable the wiki feature in the repository settings. After enabling it, you are ready to create documentation for your repository.

The editor is similar to GitLab's and GitHub's; nothing too fancy or complex there. The cloning link is easy to find in the top right corner, and it is visible on every wiki page. The history is also easy to find, and you can inspect the changes at the code level by clicking the SHAs in the history view.

An interesting feature of BitBucket is that if you create a Mercurial repository, the repository-bound wiki is naturally stored in a Mercurial repository as well. This follows from the fact that wikis in BitBucket are repository-bound, although one project may include multiple repositories.

As for text formats, BitBucket supports Markdown, Creole, reStructuredText, and Textile.


BitBucket has a nice parent-child, folder-style approach to navigation. To find all of the pages, you'll need to click the parent folder, which in this case is the repository name in the wiki view. There you'll find a list of the wiki pages in alphabetical order. If you happen to have tons of documents, a search option would have been nice to have.


You'll also face some complexity if you want to attach files other than images to your BitBucket wiki. You can drag and drop image files, but there's no simple "Attachments" button or link for any other type of attachment. As with GitHub, your options are 1) to upload them to an external webpage and link to it, or 2) to clone the repo, add the attachments, and commit them.

Summary of BitBucket

I find it counter-intuitive to have to separately enable the wiki in the repository settings. On the other hand, if you do not want a wiki in your project, this is a good feature. Furthermore, adding attachments should be easier, and a search function for the wiki pages would improve the user experience remarkably.

Conclusion

All of the reviewed Git-powered wikis have a lot in common, such as intuitive editors and nice UIs. However, differences do exist. Since everybody values different features, I listed the key functionalities in the table below to help you spot the differences more easily.

Please share your thoughts on social media and tell us, what functionalities you appreciate most in a wiki tool.

[Table: side-by-side summary of the key wiki features in Helix TeamHub, GitLab, GitHub, and BitBucket]

Want to learn more about why Helix TeamHub is the no. 1 alternative to GitLab, GitHub, or BitBucket? Check out our comparison resources.


A Battle-Tested DevOps Platform

jpieper September 12, 2017
Waterfall
Continuous Delivery
DevOps
Agile

Improving Your Development Workflow

A fellow software developer asked a relatively important question on Reddit a while back: how can we improve our development workflow?

The question was backed by a detailed description of their current workflow and the original poster's ideas about the methods they should be using. People contributed very informative comments to the discussion, so we recommend this thread to anyone interested in creating more effective software development workflows.

The poster wasn't sure how to introduce DevOps to their organization or what tools were necessary to do so successfully. Obviously, there is no shortage of tools out there that have a role to play as you build out your DevOps pipeline, but in this post, we'll focus on building a DevOps platform on top of which you can layer your other tools.

DevOps in Short

To truly succeed at DevOps, you have to embrace an entire culture — one that spans various departments in your organization as well as the entire lifecycle of your software. And that's the short version of it.

Having the proper tools in place paves the way to earning the right to say you do DevOps. But tools alone won't magically turn you into a DevOps organization overnight. That said, the proper tools are essential to making DevOps work, the primary goal being that your software in development is ready to be deployed at all times.

The longer version?

You need to get your Ops team on board.

That’s because DevOps done right breaks down traditional organizational silos. But it also requires Ops to embrace the very thing they’re conditioned to disdain: change. And yes, breaking down communication barriers between developers, IT, testers, and business executives presents you with a legitimate obstacle to adoption.

But if you succeed, you reap the benefits of having a single, seamless entity where increased communication leads to better collaboration and creates added value for the customer — faster.

Of course, DevOps isn't the first methodology that aims to speed up software delivery. So what makes DevOps so different from Waterfall or Agile?

The Evolution from Waterfall to Agile to DevOps

In the Waterfall model, you first define the outcome according to customer needs, and then the development process runs until a "finished" product is released. The biggest problem with this approach is that the customer's needs usually change during the development phase. You end up delivering software that doesn't meet their shifting needs, or you spend a lot of time and money changing your plans mid-development.

The next step in the evolution is the Agile model. In Agile, the idea is to develop software in small iterations and be able to adapt to your customer's changing needs faster than in Waterfall. However, this model has its hitches as well:

  • Budget goals and deadlines are often missed.
  • Completed software components are incompatible.
  • New features break old functions.
  • Huge silos between development and IT operations create animosity.

DevOps fulfills the promise of Agile, bringing the same kind of agility to more parts of the development and deployment process. With continuous integration (CI) and continuous delivery (CD) pipelines, you can release often, the releases actually work, and your products meet, and even exceed, customer expectations. Cross-departmental cooperation ensures tools and processes streamline development instead of forming bottlenecks. With the right tools, you can enable automation and increase transparency throughout the duration of the project.

[Diagram: the DevOps production model]

The picture above represents how a DevOps production model works.

As you can see, business needs are the starting point for the DevOps model: understanding your customer's needs and planning work accordingly. Once you define an initial set of requirements, software development can start. Production then runs continuously: automated testing and deployment allows new versions to be released in short intervals. And the whole cycle is traceable to ensure that everything runs smoothly.

If, and when, needs change during development, they are easy to implement without starting over from scratch. The revised needs are communicated and documented in the requirements management tool, and from there they are delivered to the development team, which implements the change. Once again, automation does its thing to ensure that each new change works once integrated into the software and can be deployed fast.

To ensure a constant feedback loop between stakeholders, effective communication tools are vital.

Building a Battle-Tested DevOps Pipeline

There is no mythical behemoth that can take care of every single aspect of your DevOps pipeline. To fully support your nascent DevOps culture, you need a platform that supports multiple tools.

The DevOps platform presented below is battle-tested in development organizations that have thousands of users in industries ranging from finance to logistics.

Starting with business needs means you need a tool that can handle both requirements management and general project management. Helix offers a complete suite of Agile project management tools including requirements management, issue management, and test case management, and it integrates seamlessly with JIRA.

When it comes to development, you'll be using a variety of tools. For version control, Helix TeamHub is the best choice because it allows you to:

  • Delegate IT’s tasks to project owners with self-service project administration.
  • Solve problems and communicate solutions using project-based wikis and other collaboration tools.
  • Host multiple repository types under a single platform including Git, SVN, Mercurial, Maven, and Ivy.
  • Integrate with all of the tools mentioned in this article to implement your DevOps pipeline.
  • Manage source code, large binary files, and build artifacts under a single platform using Helix TeamHub Enterprise with Helix Core.
  • Run 40-80% faster builds in Helix TeamHub Enterprise, powered by Helix4Git.

Continuous Integration

Once you have a version control system robust enough to power your DevOps initiative and a development environment in place, it’s time to consider what you’ll do as new code rolls in. That means it’s time for a little continuous integration. For that, one of the most popular tools in use today is Jenkins.

Continuous Testing

You can perform unit tests with JUnit, a simple framework for running repeatable tests in which code tests code. SonarQube has proven to work phenomenally for static code analysis. Many use tools like Artifactory to store build artifacts, but another great option is Maven or Ivy, since those repositories can be managed in the same project as your source code using Helix TeamHub.

Automated acceptance testing is run with Robot Framework. It has an easy-to-use, tabular test data syntax, and it utilizes a keyword-driven testing approach.

Continuous Deployment

Ansible can take care of production deployments and configuration management. It's a simple IT automation engine that automates cloud provisioning, configuration management, application deployment, and intra-service orchestration. For software containerization, our chosen tool is Docker.

Monitoring Your Progress

Finally, you can track the entire process with Zabbix, which is designed for real-time monitoring in the enterprise; it works even if you need millions of metrics from thousands of servers. You can also use Grafana to display metrics visually on clean, informative dashboards.

Let's combine these tools and put them in the right places in the DevOps production model:

 

[Diagram: the DevOps production model with the tools mapped to each stage]

Conclusion

The original poster on Reddit, who wanted to streamline their software development process, should definitely look into DevOps. However, the tools are only one part of a DevOps organization, and setting up the platform described above doesn't instantly make your organization a DevOps advocate. You'll need a complete cultural change and commitment from the whole organization.

If you have already started the cultural shift toward DevOps and want to set up the best possible DevOps environment, just use the example above. You can get started by signing up for Helix TeamHub!


Your Git Repository in a Database: Pluggable Backends in libgit2

jpieper September 12, 2017
Repository Management

Git has a well-known, well-defined structure for how it stores data. In the .git directory of every Git repository you can expect to find certain things: objects for the data, refs for the branch and tag pointers, and so on. Additionally, everything gets stored in flat files, though some formats are a bit more involved than others.

However, it turns out this is not the only way you can store data in a Git repository. You can actually use a relational or NoSQL database, an in-memory data structure, or something like Amazon S3. The pluggable backends provided by the libgit2 library make all of this possible.

What This Means

Using alternative Git storage solutions is probably most interesting for services or products that provide Git hosting like we do at Perforce. Use cases for hosting providers include:

  • Caching Git data for lightning-fast access by using either an in-memory backend or a Memcached or Redis backend with fallbacks to traditional file storage.
  • Building a fault-tolerant storage solution, or even a multi-site replication solution, by storing data in a modern database system designed for this purpose, such as Voldemort, Riak, or Cassandra.

Outside of hosting, there are several possible use cases for pluggable storage when incorporating Git access into tools and libraries.

The Two Datastores of a Git Repository

Git repositories aren't that complicated, though you would never know it by looking at Git's UI. Git repos are made up of just two structures, upon which everything else is built: object databases and ref databases.

The Object Database

The object database is where all the data is stored:

  • The contents of all files
  • The structures of directories
  • Commits
  • Everything 

However, what's remarkable about the object database is that it's essentially nothing but a key-value store.

Git stores data in the object database using hash-based retrieval, meaning that the keys of the store are the (SHA1) hashes of the values. That has some rather interesting implications: the values in the object database are essentially immutable, and you don't need an update operation.


What's left is a basic data structure with essentially four operations:

get_keys()
read(key_or_prefix)
add(key, value)
delete(key)

It's easy to see that you don't necessarily need flat-file storage to implement something like this. Git's default, file-based object database is just one implementation of the abstract concept.
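
You can poke at the default implementation with Git's own plumbing commands, which expose exactly this key-value behavior:

$ echo 'hello' | git hash-object -w --stdin       # add(key, value); the key is the SHA1
ce013625030ba8dba906f756967f9e9ca394464a
$ git cat-file -p ce013625030ba8dba906f756967f9e9ca394464a   # read(key)
hello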

The Ref Database

The ref database stores a Git repository's references — the branches, tags, and HEAD.

Just like the object database, the ref database is also essentially a key-value store. The keys are the identifiers of the references, and the values are SHA1 hashes, which in turn correspond to commit objects in the object database.


The values of a ref database are mutable, which is a key difference from the object database: the commit that master points to may change over time. That means there's a slight difference in the operations a ref database must provide:

get_keys()
read(key)
write(key, value)
rename(old_key, new_key)
delete(key)
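
Git's plumbing commands map directly onto these operations on the default file-based ref database (the commit SHA below is a placeholder):

$ git rev-parse refs/heads/master                 # read(key)
$ git update-ref refs/heads/master <commit-sha>   # write(key, value)
$ git branch -m old-name new-name                 # rename(old_key, new_key)
$ git update-ref -d refs/heads/old-name           # delete(key)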

Libgit2

Libgit2 is an implementation of Git written in pure C. It's designed to be an alternative to the Git reference implementation, providing easy linkage to other libraries and applications. It is actually the basis of the Git language bindings in many programming languages.

One of the less advertised features of libgit2 is its pluggable backends: instead of storing the object database and the ref database the way Git usually does, in flat files, you can provide your own backend implementation and do whatever you want. Let's see how that works.

The Libgit2 Object Database Backend

The libgit2 object database code accesses data through functions in a C struct git_odb_backend, defined in git2/sys/odb_backend.h. It basically has the functions described above, with some additional functions for convenience (reading object headers only, streaming access, writing a packfile).

There are two built-in implementations for this struct that ship with libgit2. They implement the two object storage formats that Git traditionally supports:

  • odb_loose implements the loose file format backend. It accesses each object in a separate file within the objects directory, with the name of each file corresponding to the SHA1 hash of its contents.
  • odb_pack implements the packfile backend. It accesses the objects in Git packfiles, which is a file format used for both space-efficient storage of objects and for transferring the objects when pushing or pulling.

As you create a Git object database, you can provide any instance of the git_odb_backend struct, including a custom-built one. This lets you plug in your own implementations, as we'll see later in this article.

The Libgit2 Ref Database Backend

You can also provide a custom backend for the ref database, resulting in a potentially flat file-free Git repository. The technique libgit2 uses for this is essentially the same as with the object database. There is a struct git_refdb_backend, defined in git2/sys/refdb_backend.h, with functions for the different access operations.

There is just one implementation of the ref database backend that ships with libgit2: The file system backend refdb_fs, which accesses the refs in the refs directory of a repository.

Existing Alternative Backends

In addition to the built-in backends already mentioned, the libgit2-backends repository maintained by the libgit2 team provides a few custom object database backends, including ones for SQLite, MySQL, Memcached, and Redis.

These are not only useful by themselves, but they also provide a nice starting point for writing a custom backend of your own.

Setting It Up

Let's look at how to actually use these alternative backends.

What you would usually do when using the built-in backends would be to invoke git_repository_open with the file system path containing the usual .git directory contents, such as the loose object database, the packfiles, and the refs.

What we need to do instead when using custom backends is to invoke git_repository_wrap_odb, providing our own object database with a custom backend.

Let's say we have custom backends written for the Voldemort database, with the following constructor functions:

int git_odb_backend_voldemort(git_odb_backend **backend_out, git_repository *repo, const char *repo_id, const char *bootstrap_url, const char *store_name);
int git_refdb_backend_voldemort(git_refdb_backend **backend_out, git_refdb *refdb, const char *repo_id, const char *bootstrap_url, const char *store_name);

Here's how we can set up a Git repository backed by those backends:

git_repository    *repo;
git_odb           *odb;
git_odb_backend   *voldemort_odb_backend;
git_refdb         *refdb;
git_refdb_backend *voldemort_refdb_backend;
int               error = 0;

error = git_odb_new(&odb);
if (!error)
  error = git_repository_wrap_odb(&repo, odb);
if (!error)
  error = git_odb_backend_voldemort(&voldemort_odb_backend, repo, "my_repo", "tcp://localhost:6666", "git_odb");
if (!error)
  error = git_odb_add_backend(odb, voldemort_odb_backend, 1);
if (!error)
  error = git_refdb_new(&refdb, repo);
if (!error)
  error = git_refdb_backend_voldemort(&voldemort_refdb_backend, refdb, "my_repo", "tcp://localhost:6666", "git_refdb");
if (!error)
  error = git_refdb_set_backend(refdb, voldemort_refdb_backend);
if (!error)
  git_repository_set_refdb(repo, refdb);
  • On line 8 we construct an object database without any backends.
  • On line 10 we construct a Git repository backing this object database.
  • On line 12 we construct the Voldemort object database backend.
  • On line 14 we plug in the Voldemort object database backend to the object database. Object databases support multiple backends, and the order in which lookups are done is based on a priority number. We give the Voldemort backend priority 1.
  • On line 16 we construct a ref database without any backends.
  • On line 18 we construct the Voldemort ref database backend, just like we did with the object database.
  • On line 20 we plug in the Voldemort ref database backend to the ref database.
  • On line 22 we finally plug in the ref database to our repository, and we have a functioning repository we can read and write to.

In place of the Voldemort backends, you could also use one of your own implementations or one of the existing custom implementations from libgit2-backends. You could even provide multiple custom object database backends by adding them with different priorities. This can come in very handy when implementing caching, for example.

If you're not working in raw C, you can take a look at all the language bindings based on libgit2 to see how you might be able to achieve this in your programming language. 


Jenkins Integration to Feature Branch Workflow

jpieper September 12, 2017
Version Control
Branching
Continuous Delivery

We wrote earlier about how code reviews work in Helix TeamHub. That post covered how the successful Git branching model and its derivatives can be implemented efficiently in Helix TeamHub with code reviews and mandatory approvals. In this blog post, we are going to build another feedback cycle on top of the basic code review workflow, which relies only on a human reviewer. The new feedback cycle comes from automatic build tools such as Jenkins. This post covers how to set up a feature branch workflow that requires a successful (green) build from Jenkins before a feature branch can be merged into the target branch in Helix TeamHub.

Initial Setup

This post assumes that we are continuing from where the previous blog post left off. There, we set up a Helix TeamHub project and created a new Git repository. We also added some feature branches to the Git repository.

What we want to do next is to set up a Jenkins job to be triggered for the feature branches in this repository. We also want Jenkins not just to execute the build, but also to notify Helix TeamHub whether the build succeeded or failed. To set this up, we need to:

  1. Set up a Helix TeamHub bot account for programmatic access.
  2. Set up Helix TeamHub Jenkins hook for triggering the Jenkins build.
  3. Configure the Jenkins job to build feature branches.
  4. Install and configure Jenkins Helix TeamHub Plugin.

1. Set up a bot account for programmatic access.

Helix TeamHub has a unique concept called bot accounts, or simply bots. Bots are used for external access to Helix TeamHub APIs as well as version control systems. Configuring a continuous integration server is an ideal example for using bots instead of your personal credentials.

Setting up a bot account in Helix TeamHub is done through the Bots UI. Bots can be created by any user who has access to Helix TeamHub; when a bot is created, the creator becomes its owner, and multiple users can share ownership of a bot account. In this example, you don't need to assign owners or members to the bot, but simply add the bot to the project.


Adding a bot to a given project is done in the team view of the project. For this purpose, we assign the bot as a guest, since only read access to the code is required, and bots with guest access are able to publish build events. For a complete description of bot credentials, see the Helix TeamHub user guide.


2. Set up Jenkins hook for triggering the build.

Next, we want to set up a Jenkins hook that triggers our build for each new change to our feature branches. Hooks are managed in the project hooks view, and Helix TeamHub supports over 75 hooks to different services. After you add a Helix TeamHub Jenkins hook to a Git repository, a commit hook posts a request to http://yourserver/git/notifyCommit after each change. You can find more information in the Jenkins Git Plugin documentation.
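
Under the hood, the hook is just an HTTP request, so you could trigger the same polling manually with something like the following (the repository URL is hypothetical):

$ curl "http://yourserver/git/notifyCommit?url=https://teamhub.example.com/projects/demo/repositories/app.git"

Jenkins then polls the repository and starts any job whose configuration matches the notified URL.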


3. Configure a Jenkins Job.

After we have everything set up from the Helix TeamHub point of view, let's jump into Jenkins, create a new job, and configure the necessary job-related settings.

Add the clone URL for the repository to the job configuration. Helix TeamHub supports both SSH and HTTPS protocols for repository access and while SSH is typically preferred over HTTPS, in this example we use HTTPS for simplicity.

Configure the branches to build to refs/heads/features/**. This makes Jenkins run the job only upon changes to branches prefixed with features/, e.g., features/login, features/logging, features/foo, and so on.


Next, add your bot credentials to the configuration. You can find the bot credentials in Helix TeamHub, either from the project team view or in the company bots view, by clicking the cogwheel icon next to the bot name.


In order for the Helix TeamHub Jenkins hook to start the execution of this job, you need to enable the Poll SCM option in the job configuration; no polling schedule needs to be set, however. Jenkins uses this option to distinguish which builds should be started upon changes and which should not.

Setting up the actual build steps is project-specific and is thus skipped in this post.

4. Configure the Jenkins Helix TeamHub Plugin.

The Helix TeamHub Jenkins Plugin can be installed from the Jenkins plugin manager and should be configured according to the plugin's documentation. After installing and configuring the plugin, there should be a new post-build action available in the Jenkins job configuration, named Helix TeamHub notification.

To configure Jenkins to send the build information successfully, add the Helix TeamHub notification post-build action and set the bot account's account key in it. This way, the Helix TeamHub Jenkins Plugin uses the correct credentials when creating the event. After setting the post-build action and saving the Jenkins job settings, everything is ready for testing.

Testing the Setup

We can now test the setup by creating a feature branch named features/new-feature, making a couple of commits on it, and pushing it to Helix TeamHub. Once the branch is pushed, we can create a new code review of the branch against the master branch and choose the Require passing build option.
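
On the command line, that could look roughly like this (assuming the Helix TeamHub remote is named origin):

$ git checkout -b features/new-feature
$ git commit -am "Work on the new feature"    # repeat for a couple of commits
$ git push -u origin features/new-feature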

Choosing the Require passing build option disables merging the changes through the Helix TeamHub web interface until a successful build notification has been sent to Helix TeamHub for the given branch. The changes can still be merged manually via the command line, which is sometimes useful.


When we click the Create button and open the code review, we should see that the build event is already green and that merging the changes is possible. Naturally, with real automated builds and verifications, the build might take longer, and the successful or unsuccessful status would only appear after the build has executed.


Conclusion

Setting up a Jenkins integration with your feature branch workflow provides a quality gate for new features under development. In this post, we explained how to set it up with Helix TeamHub and Git repositories; similar workflows can also be achieved with Mercurial repositories.

If you would like to test the functionality yourself, sign up for free.


How to Host Subversion (SVN)

jbartush September 12, 2017
Repository Management
Version Control

A Complete Beginner’s Guide to SVN Version Control

What is SVN?

Subversion is a centralized version control system for managing versioned files such as source code, web pages, and documentation. This article assumes that you, dear reader, are familiar with version control systems. If not, the shorthand definition of a version control solution is any system that records and stores file changes, allowing you to restore revisions from a specific point in your development history.


Subversion is also open source, as opposed to a commercial option such as Helix Core. Helix Core is selected by many for its scale, performance, and support, features that can be more difficult to achieve with open source tools.

SVN Defined

Here are some helpful terms and concepts on SVN hosting.

SVN Repository

A repository is where the code and its history are stored. The repo can be accessed in various ways depending on the server where it's hosted, either from an organization's internal server (on-premises) or from an external web client (a SaaS cloud server).

Because SVN is centralized, development teams always have one central repository serving one or many local checkouts, and any and all changes can be contributed back to the central repo as soon as they're ready.

SVN Hosting

For teams that want a cloud hosting option to handle their large-scale repository management, SVN hosting services let them create a repo in the cloud, manage its access rights, and control everything as they would from an internal SVN server, minus the cost of maintenance and management. Cloud hosting services are becoming increasingly popular as teams prioritize development over added IT infrastructure and cost.

However, on-premises SVN hosting is the choice for many organizations too, as cloud hosting often lacks the comprehensive security and stability features of an organizationally managed solution. With valuable IP at risk, many development teams opt for an on-premises solution to manage access controls and repo- and branch-level permissions, and to ensure developer uptime.

Commits

Commits are the saved states of code changes at specific points in time. A commit literally saves your development progress; every commit is a mile marker on your product's roadmap.

Commits in SVN flow between the local checkout and the central repository: changes are committed to the central repo. Each commit includes both the changes and a commit message, which gives details on the changes you're introducing. For example:

$ svn commit -m "Removed old file 'feature x'."

Deleting         feature x

Committed revision 2.

The more explanatory your commit messages, the more visibility and insight you'll gain from your code changes. What’s more, commit messages can provide clarity to past or archived changes that would be confusing without context.

SVN Lifecycle

For the most part, everything you do within SVN follows a development pattern, or lifecycle. Here’s a quick rundown of what that roadmap looks like:

  • Checkout a repository
  • Perform changes
  • Review changes
  • Revert changes
  • Resolve conflicts

Checkout

You’re ready to make great products. You’ve got your developer hat on. Let’s get started.

Before you can make any changes, you must first check out a repository from your hosted SVN workspace. SVN checkouts will bring over the latest revision of the repository you want to work with. If you’ve just created the repo, no commits exist yet and no revisions will be found, so you’ll be on the first version of that repo.
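
For example, a first checkout from a hosted repository might look like this sketch, where the server URL and project name are placeholders for your own:

$ svn checkout https://svn.example.com/repos/myproject/trunk myproject
A    myproject/README.txt
Checked out revision 42.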

Perform Changes

Once you have the SVN repository checked out, you can start making changes. Choose from your favorite developer tools and editors to perform changes to your repository that reflect your product development goals.

Submit changed files to your repositories and track those commits using your SVN host’s client UI.
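
From the command line, a typical add-and-commit round trip might look like this sketch (the file name is a placeholder):

$ svn add feature-x.txt
A         feature-x.txt
$ svn status
A       feature-x.txt
$ svn commit -m "Add feature x."
Adding         feature-x.txt
Transmitting file data .
Committed revision 3.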

Review Changes

After submitting various files, it's important to review the changes you've made. SVN hosts will take file updates committed to an individual repo and list them as revisions. If you've added five versions of the same file to a repo, you can navigate that complete history from version 1, 2, 3, 4, or 5. SVN hosting tools make that review process simple and easy to execute.
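
For instance, "svn log" and "svn diff" let you inspect that history from the command line (the file name and revision numbers here are illustrative):

$ svn log -l 3 feature-x.txt     # show the three most recent revisions
$ svn diff -r 4:5 feature-x.txt  # compare versions 4 and 5 of the file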

Did a developer submit a glaring error that needs to be rolled back to a previous version? Well, then…

Revert Changes

SVN provides a command that can revert your file changes to a previous, healthy version. Simply use “svn revert” in your command line to bring your file back to the state it was in before your edits. And the command isn't limited to individual files. You can revert entire directories or repositories in a single command.
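
As a quick sketch (the file name is a placeholder), reverting a single file and then an entire working copy looks like this:

$ svn revert feature-x.txt
Reverted 'feature-x.txt'
$ svn revert -R .

Keep in mind that "svn revert" only discards uncommitted local changes. Undoing a change that has already been committed is done with a reverse merge instead (for example, "svn merge -c -2 ." followed by a commit).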

Resolve Conflicts

Conflicts occur when two developers make changes to the same file at the same time. This is a particularly common occurrence in large organizations, where repos and files permeate the enterprise.

Conflicts are a part of a normal development workflow and are pretty straightforward. Basically, you have three options. 1) “My colleague’s changes are best. Forget mine, we’ll use his.” 2) “Man, I’m smart. My changes are best. Sorry, colleague, we’re scrapping yours.” Or 3) “Our changes work best together. What a great team we are. Time to merge and commit.”

Using a simple merge workflow, users can mark merge changes as resolved and commit the new-and-improved file back into the project environment.
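
In command-line terms, that workflow might look like this sketch (the file name is a placeholder). The "C" in the update output marks the conflict, and --accept=working keeps your locally merged result:

$ svn update
C    feature-x.txt
$ svn resolve --accept=working feature-x.txt
Resolved conflicted state of 'feature-x.txt'
$ svn commit -m "Merge colleague's changes into feature x."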

Rinse. Repeat

And that's it! You'll use some variation of this approach to fuel your development process over and over again, across thousands and thousands of files in your SVN project repos. The effectiveness of your development approach, however, will hinge on whether your SVN hosting platform is the best fit for your organization.

Your Version Control and Code Hosting Partner

Now that you know more about SVN and, perhaps, are thinking an SVN-hosted solution is right for your development environment, why not team up with an industry-proven partner who can help you get the most productivity out of your environment?

Helix TeamHub is the only code hosting and developer collaboration tool that supports SVN repository management in addition to Git, Mercurial, and Maven. Perforce provides hosting options for teams of all sizes, both on-premises and in the cloud.

What Helix TeamHub offers out-of-the-box:

  • Unlimited private SVN repositories 
  • Daily backups and high reliability, with 99.99%+ uptime
  • Integrations and webhooks for industry-preferred tools, such as Jenkins, JIRA, and Slack
  • Built-in code review, issue tracking, and delegated access management 

If you’re looking for a solution to help manage and host your SVN repositories, consider Helix TeamHub, featuring pricing options and license tiers that fit every organization, large or small.

Helix TeamHub is free to get started, with hosting for up to 5 users and 1GB of data.


What's New in Helix TeamHub 2017.1

cberres September 13, 2017
Repository Management
Version Control
Developer Collaboration

As we welcome Helix TeamHub 2017.1 into the Perforce family, I want to give you a full rundown of the new features that longtime users and newcomers alike can leverage in the latest release.

Please note that this release is currently available only in the cloud. We'll release the on-premises packages at the beginning of October.

Default Reviewers for Code Reviews

With the release of 2017.1, you have the ability to set default reviewers on Git and Mercurial repositories.

Here are a few pointers for you to start managing default reviewers for your repositories:

First, head to the Code Review tab under Repository Settings, and select the team members who will be added automatically to each review created for the repository in question.

Default reviewers allow you to streamline the review workflow by automatically assigning the correct team members to review changes. Designated reviewers see code reviews they've been assigned from the dashboard, so they know which reviews to work on immediately after signing in to Helix TeamHub.

Filter Code Reviews by Assignee

In addition to default reviewers, Helix TeamHub 2017.1 delivers another important improvement to your code reviews. When you want to see your review, or anyone else's, you can conveniently search for code reviews by assignee without opening each review to see who was assigned as a reviewer.

Smaller Enhancements

Last but not least, we've made a few smaller enhancements to Helix TeamHub 2017.1 for a sleeker experience within the UI:

  • Applying notification settings is a breeze with clearer instructions.
  • Finding your projects is faster and easier with a combined view of all repositories.
  • Wikis without a Home.md file no longer cause errors.
  • Going back to all issue listings no longer automatically selects the default milestone, but remembers your selection.

Beauty from the Inside Out

We wanted Helix TeamHub 2017.1 to look as great on the inside as it does on the outside, which is why the latest release has made a few important upgrades on its backend. Ruby, MongoDB, Nginx, and Redis have been upgraded to the latest versions. In addition, Helix TeamHub 2017.1 will no longer require a custom-built OpenSSH package. Naturally, these backend component upgrades provide users with the most robust security and performance possible. But they’ll also play a vital role as we continue to roll out more features for you in the future.

Try For Free or head to the What's New page to download the latest version.

What's New in Surround 2017.2

dborcherding September 25, 2017
Repository Management

Surround SCM 2017.2 is now available! Check out the enhancements we made in this release.

New Logo

Like the rest of the Helix family, Surround SCM now has a shiny new logo!

Search in Windows and Dialog Boxes

Don’t waste time hunting for what you need in windows or dialog boxes. You can now search to quickly find information in the following windows and dialog boxes:

  • Cloaked Repositories
  • Code Reviews
  • Email Notifications
  • Labels
  • Reports
  • Security Groups
  • Shadow Folders
  • Shelves
  • Triggers
  • Users
  • Working Directories

View History and Go Directly to Files from Code Reviews

As you’re reviewing a file in a code review, you may want to know what changed between versions and why. You can now access historical versions on the History tab while reviewing a file. You can also perform actions on historical versions from this tab. Learn more.

If you want to quickly navigate to a file included in a review, right-click it and choose Go to File. The Source View window opens and the file is selected.

See More Information in Security Group Reports

When you create security group reports, you can now include each group’s users and server security permissions to see more details about the group. Learn more.

Refreshed Surround SCM Web Interface

Surround SCM Web got a few nips and tucks for a more modern look in this release.

And More!

This release also includes other enhancements, such as:

  • Merge Microsoft Word and other binary files when duplicating changes across branches.
  • Additional events are now available when adding event restrictions for filters, advanced find, and reports:
    • Change custom field
    • Change state
    • Promote from
    • Promote from with merge
    • Rollback file
  • Log in using single sign-on from the API with the new sscm_connect_ext call.

Ready to check out Surround SCM 2017.2? If you have a current support and maintenance plan, upgrades are free. If you’re not already using Surround SCM, contact us to try it out.


The 1 Swedish Word That Best Describes the Perforce Acquisition of Hansoft

mreisenauer September 28, 2017
Partner
Agile

Exciting news!

After 12 years of helping Agile enterprises everywhere ship products faster, we are excited to tell the world that Hansoft is now a Perforce company!

It was the best decision we could make — for both the software and its fans.

Whether they’re game studios or Internet of Things pioneers from places like Germany, Japan, or China, it is our customers who help make Hansoft the tool it is. They are the ones who have incorporated Hansoft into their workflows and helped us improve and adapt the software to new ways of working.

Now, as part of the Perforce team, we will be able to serve our customers better than ever.

While you can expect continued improvement to Hansoft, you can also expect us to stay true to our core mission of empowering both development and executive teams to work together better.

Or, in a single Swedish word (because Hansoft originated in Sweden), lagarbete.

Whatever language you speak, it is an important word these days.

Because, as technology changes our lives and we keep looking for the best, fastest, easiest solution, that one word is the real goal.

What does it mean?

Translated into English, lagarbete means this: teamwork.

As we look to the future of Agile enterprises, we need to keep that word in the foreground, while asking, “What can we try next to improve together most?”

With that, all of us at Hansoft are excited to be part of the Perforce Software development platform! As the Chief Product Officer at Hansoft, I’ll be working closely with Tim Russell, CPO, and Janet Dryer, CEO of Perforce, as we continue to support lagarbete everywhere!

If you haven’t yet, please visit our updated website and try a free version of Hansoft.

Tack!

Rikard Nilsson
Chief Product Officer
Hansoft


The 4 Hidden Dangers of Jira Add-ons That Can Stop Your Project in Its Tracks

dborcherding September 29, 2017
Application Lifecycle Management

Software development teams use Atlassian’s Jira because it works well for bug tracking.

It’s also easy to use, and popular enough that new team members are likely to be familiar with it.

But Jira only does one thing: issue management.

What if you need requirements management? Or test case management? You could purchase third-party add-ons in order to get this coverage, but that approach involves a few hidden dangers.

We touched on this in our earlier blog post, What to Do When Jira Can't Handle Your Workflow Anymore.

In theory, add-ons from the Jira marketplace seem like a good way to add functionality as you need it. In practice, however, this approach can quickly become painful.

If you’re tempted to go this route, here are four factors to consider.

1. Exponential License Fees

The first thing to know is that each add-on typically carries an additional license fee per user. These fees can quickly double your initial investment in Jira.

Even with only two add-ons (one for requirements management and one for managing test cases, for example), you’re now paying for three licenses for each user.

For a team of 20 people, that’s 60 licenses. Now imagine what an enterprise-level development team would need.

And that’s just for two add-ons; most teams need four or five to get the features and functionality they need to manage their development lifecycle.

2. Additional Complexity

On top of fees, each add-on complicates your Jira instance a little more — until it becomes challenging to make it all work.

Many companies resort to hiring an outside consultant when the complexity becomes too time-consuming or frustrating to do on their own.

It’s such a common problem that several consultancies have built their businesses around the complexity of configuring Jira and the selection and management of add-ons.

And they aren’t cheap.

Now, you’re not only paying for additional licenses per user, you’re also paying a consultant to make it all work — and you’ll be paying them again when you have to buy another add-on.

3. Lack of Support

What happens when something breaks or goes wrong?

Do you call Atlassian’s support team? The consultant you hired? The company that made the app (assuming you know which app is causing the problem)?

This is another hidden danger of add-ons: No single vendor takes responsibility for the overall quality and support of your solution.

To make matters worse, many of the add-ons are from small vendors, who may not keep them up to date or even support them in the future.

4. Failed Upgrades

Out-of-date add-ons can, in turn, cause failed upgrades and scenarios that stop your team’s work.

That makes every add-on a potential point of failure. The more add-ons, the more problems you encounter after a Jira upgrade.

Not only that, but you’re beholden to the lowest common denominator; if just one app doesn’t work with the latest Jira upgrade, you’re stuck.

Want to Avoid These Jira Pains?

Download our white paper, “Beyond Bug Tracking with Jira.”

You’ll learn how to stretch your coverage without the pain of add-ons using Helix ALM. Helix ALM integrates with Jira for end-to-end coverage of even the most complex product development workflow.


P4VS Gives Microsoft Visual Studio 2017 Helix Power and Scale!

cgehman@perforce.com October 2, 2017
Integration

Microsoft Visual Studio 2017 represents a great upgrade with many new features for this popular development environment. Microsoft has taken a fascinating turn with its support of open source projects and even cross-platform development. VS 2017 brings many new technologies, such as the cross-platform .NET Core, robust built-in language support for TypeScript, and F# for functional programming in big data and machine learning applications. Not to mention continuing to ratchet up integration and ease of use for developers deploying to the Azure cloud. Microsoft really makes it easy to adopt new tech these days, too. Just one great example of the cross-platform and cloud support is the appearance of “Add Docker Support” on the Project menu in VS 2017: you can instantly spin up the dev image of your application in a Docker container for easy iterative debugging!

Windows developers are a key part of the Perforce community, and one important area of our efforts on the platform is new enhancements to the P4VS Plugin for Visual Studio, which has been downloaded almost 250,000 times! While the plugin has been around since 2012, the recent update is significant and provides access to more Perforce Helix features. It uses Microsoft’s new SCC Integration to let developers access Helix Core’s powerful capabilities right from within the Visual Studio interface.

If you are a Visual Studio user but you’ve been using the command-line interface or the Helix Visual Client (P4V), now is a great time to try out the P4VS Plugin. P4VS is a fully compliant Visual Studio Integration Package, designed for full compatibility and ease of use. Perforce is a certified member of the Microsoft Visual Studio Industry Partner (VSIP) program. Built on P4API.NET for speed and stability, P4VS is available as a 32-bit package and is compatible with 64-bit Windows platforms.

“Start Me Up”

The enhancements start with the installation of the P4VS Plugin. When you install the plugin, if you already have a Helix workspace root, it becomes your default workspace for VS projects. Then, when you start a new project, you can select P4VS in the bottom right-hand corner of the window so your new work is included in version control from inception.

Working Online

You’ve always had the ability to work online and offline with Perforce, but now the Connection Toolbar at the top of the UI lets you see your connection status, and updates it dynamically. It lets you see and connect to a list of the recent Helix servers you’ve used. It functions similarly to the Connections button in Perforce’s own Helix Visual Client (P4V). You can choose from existing selections or type in the information for a new server. This is a great time saver if you connect to multiple servers.

Working Offline

It’s now easier than ever to manage your work when working offline. The menu selection and icon on the toolbar allow you to reconcile offline work on your terms. The reconcile is based on the solution root and everything under it. When you choose reconcile, it shows the local files that are not in the Perforce depot. It shows any modifications, moves, adds, and deletes. If there is a really long list of files for adds, deletes, and mods, we tell you how many there are. There is also an “advanced” selection that brings up the P4V diff option.
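
If you also work from the command line, a rough equivalent of this cleanup is Perforce's reconcile command; the depot path below is a placeholder:

$ p4 reconcile -n //depot/projects/myapp/...   # preview adds, edits, and deletes
$ p4 reconcile //depot/projects/myapp/...      # open the changed files in a changelist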

P4VS is integrated with the Visual Studio Solution Explorer, providing access to functionality and status information. Badges on file icons indicate Perforce status. When you right click a file in Solution Explorer, the appropriate P4VS actions enabled for that file are available for selection in the context menu.

For More Information

Read about Microsoft integration here.

Get the P4VS Plug-in here.


4 Ways Requirements Management Can Improve Teamwork

dborcherding October 16, 2017
Requirements Management

The success of any development process hinges on teamwork.

And teamwork hinges on communication.

But, if you can’t coordinate communication between project management, development, sales, customers, and other stakeholders, the team can get out of sync quickly.

To keep them all swimming in the same direction, they need frequent updates on the status of requirements and where their teammates are with their assigned tasks.

A good place to start?

Your requirements management (RM) tool.

When your RM tool includes the right collaboration capabilities, you:

  • Solve complex problems faster.
  • Build the right product the first time.
  • Consistently complete projects on time and within budget.

A full-featured requirements management solution offers many benefits, but there are four ways RM tools can improve communication and teamwork.

1. Centralized Requirements Allow Real-Time Sharing

An RM tool allows you to centralize requirements and break free of the single-owner constraints of documents and spreadsheets. It gives you the flexibility to organize (and reorganize) the hierarchy of requirements just like a document, but everyone has constant access and multiple authors can work on requirements at the same time.

2. Requirements Can Be Reviewed by Everyone

The most expensive bugs are caused by bad requirements. With an RM tool, everyone can participate in reviews, allowing more voices to be heard. And reviews take less time, because you only have to review what’s changed. This enables an iterative, consensus-driven approach to requirements. By letting more of your team participate in requirement reviews, you end up with better requirements up front and fewer expensive bugs in the end.

3. Automation Aligns the Team

Another advantage RM tools have over documents and spreadsheets is that RM tools can automatically notify team members when they have been assigned a requirement to review or approve. Product owners and stakeholders can also be notified when approved requirements have been modified, helping make sure the team is implementing the correct features.

4. Track Email Conversations With the Associated Requirement

RM tools also eliminate the danger of the team missing a requirement change because they didn’t see an email. Email conversations are stored and tracked with the requirement from which they were sent — the original email and all the replies. You can see at a glance when a requirement change was requested and reasons for the change. You can also track when the requirement was approved, and by whom.

Ditch the Documents

If you’re trying to manage requirements with documents and spreadsheets, you simply don’t have the right tools to help your team communicate effectively. A real-time, always-accessible tool will help you share requirements and track all changes and updates.

Unlike documents and spreadsheets, full-featured RM tools allow you to collaborate with multiple stakeholders — simultaneously capturing requirements, performing reviews, knowing what's approved, and, most importantly, being aware of changes.

Teamwork Isn’t the Only Thing. Read 10 Signs You’ve Outgrown Your Requirements Management Tool

Improving teamwork and communication aren’t the only reasons to move away from documents and spreadsheets and adopt a purpose-built RM tool. For a complete look at the benefits of requirements management solutions, read our white paper, 10 Signs You’ve Outgrown Your Requirements Management Tool.

 


Bolster Performance With Perforce Fall VCS Releases

cgehman@perforce.com October 31, 2017
Version Control

Between September 28 and October 26, we released important upgrades to Helix Core, Helix Visual Client (P4V), Helix Swarm, and major plugins like P4Eclipse, Helix Plugin for Visual Studio (P4VS), and our Microsoft .NET API.

These features were designed to help teams:

  • Save time
  • Boost server performance
  • Streamline development workflows

Keep reading to learn more about the upgrades and how to take advantage of them.

Helix Core: More Upgrades to Server Performance

The 2017.2 release of Helix Core continues to raise the bar by adding support for WAN acceleration technologies.

Although Helix Core’s TCP/IP tuning and parallel sync capabilities already provide enterprise-class performance, many global companies in film or game development already use WAN acceleration technologies. With this release, they’ll be able to boost performance by using the technology alongside Perforce Federated Services to ensure remote sites are in sync with central servers at all times.

How WAN Acceleration and Perforce Federated Services Work

Many of our customers with geographically distributed teams already use Perforce Federated Services thanks to its significant performance advantages compared to other solutions, even without WAN acceleration.

An edge server contains a replicated copy of the commit-server data and a unique, local copy of workspace and work-in-progress information. You can connect multiple edge servers to a commit server.

From a user's perspective, most operations are handled by an edge server until the point of submit. As with a forwarding replica, read operations, such as obtaining a list of files or viewing file history, are local. With an edge server, syncing, checking out, merging, resolving, and reverting files are also local operations. This greatly improves performance.

The WAN acceleration technology further improves the speed and efficiency of the commit-edge architecture, moving assets regardless of file size, transfer distance, or network conditions. This dramatically improves developer productivity and build performance when large, ever-changing files are involved.

Use cases suggest that organizations depending on transferring terabytes of digital assets across multiple locations (to maintain aggressive schedules and contain costs) gain the most from this support, which provides up to 14x faster operations.

If you work with large files, you know what a game changer this could be.

Boost Performance and Stability Even Without WAN Acceleration

In line with the performance theme of 2017.2, we’ve also increased parallel sync operations by improving resilience under load to support a greater number of simultaneous requests. Parallel sync is one of several techniques Perforce employs to make Helix Core the fastest VCS server on the planet. This upgrade is independent from our support for WAN acceleration technologies.
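
As a sketch, a client can request parallel file transfers on a given sync, provided the server is configured to allow them; the thread count and depot path here are illustrative:

$ p4 sync --parallel=threads=4 //depot/main/...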

Helix Visual Client: Developer Desktops Get Faster

The Helix Visual Client (P4V) 2017.3, a popular adjunct to Helix Core, is available for Linux, Macintosh, and Windows operating systems. Like Helix Core, this release focuses on performance. With the aforementioned enhancements for multiple parallel sync operations, we also reinforced the support in P4V so the client and the server work together to boost performance and stability.

Another great feature is the ability to easily restore client workspaces to their original state from within P4V. With a single click from the UI, you can reset and restore your workspace to its original state so that it matches the depot. Users can leverage this feature to rid their workspace of deleted files and files that are not under source control, and to refresh those that have been modified.

Helix Swarm 2017.3: Communication Enhancements for Code Reviews and Collaboration

As with past releases this year, the enhancements in Helix Swarm 2017.3 focus on streamlining communication on development projects, notably with whom — and how — reviews are shared.

For example, project owners, review authors, and commenters can save time during code reviews by setting up a group of users who can approve, vote, or comment on particular reviews. You can even add subgroups to organize individuals who have the same set of permissions. Additionally, enhancements improve the productivity of individual group reviewers by facilitating find and filter for group reviews.

Another great, new feature is the ability to add an email mailing list to review groups so group reviewers always know about key changes and review requests. This is especially useful for those who don’t keep Swarm open all the time.  It lets you use your email client’s ability to filter, group, and prioritize notifications sent by Swarm.

Finally, Swarm now allows flexibility in how approvals/disapprovals — sometimes referred to as “votes” — may be counted. Project owners can choose whether an action from one user will represent the action of the whole group, or instead require all users within the group to take individual action and vote for approval or disapproval.

Helix Plugin for Visual Studio: New Functionality for Microsoft Developers

The Helix Plugin for Visual Studio (P4VS) 2017.2 brings developers the enterprise-class version control features from Helix Core they love into their workflow without ever leaving the Visual Studio IDE.

Defined workspaces in Helix Core will now automatically be set as your default workspace for Visual Studio projects. You can choose from a drop-down list of Perforce servers previously accessed. When you return online after working locally for several hours, 2017.2 makes it easier to reconcile your work, showing moves, adds, and deletes to save you considerable time.

P4VS 2017.2 also features better toolbar integration. For example, the status bar now displays the number of active pending changelists and will bring up that tool window when clicked. That means you can resume work in progress more quickly. It also respects Visual Studio’s list of files that should be under source control, so you no longer have to manually eliminate files that accidentally get added to your workspace. And, when you start new projects in Visual Studio, we automatically offer you the option to incorporate them in Helix Core. Additionally, the plugin offers improved integration with Helix Swarm, so you can get your work reviewed, approved, and into production even faster.

P4API.NET 2017.2

If scripting in the robust P4 command line isn’t enough for your sophisticated custom tooling project, P4API.NET is a fully supported Helix API for the .NET environment. P4VS is built using P4API.NET, which provides speed and stability when working with large projects. Documentation and code samples are available for use of the API with C#, C++, and Visual Basic. P4API.NET 2017.2 was released to support P4VS 2017.2.

 

P4Eclipse: New Functionality for Eclipse Users

The big feature for P4Eclipse 2017.1 is support for Eclipse Neon 4.6, but pre-commit Swarm reviews are also supported now, meaning you can do more from within your IDE. The Eclipse integration enables you to create a new review or update an existing review from P4Eclipse pending changelists and submitted changelists views. And you can now update a review by choosing the pending changelist from the P4Eclipse pending changelists view, right-clicking, and selecting “Update Swarm Review”.

 

Don’t See an Upgrade You Want? Check Back

At Perforce, we’re always listening to our customers. We still have a couple of months to go this year, so if the upgrade or enhancement you desire isn’t mentioned here, check back soon. It may just be on your gift list over the holidays.

You can dive deeper during the What’s New webinar on November 9 with Perforce Senior Solutions Engineer Jackie Garcia. We’ll discuss the latest features, give you tips on how you can start using them, and answer all your questions.

What’s New in Helix ALM 2017.2

akearns November 3, 2017
Application Lifecycle Management

Helix ALM 2017.2 (formerly TestTrack) is now available, with some exciting new features and enhancements that make it easier to share information, manage users and customers, and retrieve key information.

Save and Share Item List Tabs

Tabs make it simple to switch between multiple instances of an item list with different configurations. For example, you may have three tabs for the Issues list with different columns, filtering, and sorting. You can easily switch between those tabs to see what you need. You can now save tabs to use them again later. Learn more.


You can also share saved tabs with other team members. For example, a team lead may want to share a tab to help her team see issues that need to be fixed for the current release, including each issue’s priority, currently assigned user, and links to related user stories. Learn more. 

Helix ALM 2017.2 lets you share saved tabs with other team members.

Tabs are a great resource to help with onboarding new team members. You can configure a tab to display helpful information, save it, and then set it as a default tab to show the first time new users in a security group log in. Learn more.

If you used views in earlier Helix ALM or TestTrack versions, they are converted to tabs when you upgrade.

Manage Users and Customers in Helix ALM Web

Helix ALM Web now has an Administration area where you manage users and customers.

You can now manage users and customers in Helix ALM Web's new Administration area.

You can:

  • Add users and customers to projects
  • View, edit, and delete existing users and customers
  • Retrieve global users and customers from the license server
  • Promote local users and customers to global
  • Run reports based on users and customers

Learn more.

Check out the new REST API

Developers definitely should take a peek at the new Helix ALM REST API. The API makes it easy to extend Helix ALM functionality to integrate with other applications. Learn more.

Developers can currently use the REST API to retrieve information about issues. We’re working on adding more functionality in upcoming Helix ALM releases.
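
As a purely hypothetical sketch (the host, path, and credentials below are placeholders, not the documented endpoints), querying issues over REST might look something like this:

$ # hypothetical URL for illustration only; see the REST API documentation for the real routes
$ curl -u username:password "https://alm.example.com:8443/helix-alm/api/projects/Sample/issues"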

We need your feedback to help us build a powerful, user-friendly API. Contact Perforce Support if you have suggestions or questions.

Upgrade to Helix ALM 2017.2

Ready to upgrade to Helix ALM 2017.2? If you have a current support and maintenance plan, upgrades are free. If you’re not already using Helix ALM, try it for free.


Cadence Design Systems Offers 4 Tips for a Successful Migration From ClearCase

cberres November 7, 2017
Migration
Customers

They had two system admins. Both spent half their time supporting the organization’s ClearCase repositories. Upgrades, backups, and file transfers were painfully slow. For everyone. So slow they found themselves scheduling days of down time just to complete any one of the above operations.

But with 20 years of history — a total of 276GB of data — in their ClearCase repositories, Cadence Design Systems didn’t think replacing ClearCase would be difficult.

They thought it would be impossible.

Cindi Hunter, Director of Configuration Management at Cadence, wasn’t easily deterred. She had already led the multinational electronic design automation (EDA) software and engineering services company through one migration since she started in 1995. She knew the organization could simplify administration, boost performance, and increase scale with a new versioning solution… if they could ensure a successful migration.

That’s exactly what they did.

Last week, Cindi Hunter and Tom Tyler, Senior Consultant at Perforce, teamed up to provide ClearCase users with a realistic roadmap for a successful, large-scale migration. Below are highlights and key takeaways from their shared experience moving Cadence Design Systems off of ClearCase and onto Helix Core.

 

Define a Comprehensive Plan From the Start

Before you overhaul your version control system, it’s important to define requirements for the entire project. This includes migration strategy, hardware deployment, R&D and build flows, training, and the go-live phase. Meticulous planning, which included thinking through potential challenges, turned Cindi’s team into Helix Core experts long before their go-live date. They were able to get engineers operating at full speed within two weeks of launch.

 

Keep It Simple for Front-End Users

If you don’t want to alienate your software engineers, you need to keep things simple for them as you transition from your old environment to your new one. It’s important to remember that you are not just changing your versioning system behind the scenes. The architectural decisions you make can impact the workflow of your software engineers, either accelerating adoption or causing frustration.

Cadence kept this top of mind during the planning process.

The result?

Engineers didn’t need to adapt to a new workflow. Once they learned a new set of commands, they were ready to use their new tool.

 

Provide Ample Time for Training

Perhaps most importantly, Cindi recommends providing ample time for training, citing it as a key element to include in your initial requirements document.

Cadence allocated two full weeks for training, offering users a two-part workshop. The first session provided new users with a general overview of their new solution: tools, commands, flows, and processes. The second, a chance to play in the test environment. Sessions were recorded and cheat sheets distributed. Cindi estimates that the training phase not only increased adoption, but also saved everyone resources and frustration over time.

 

Leverage the Power of Perforce

Today, Helix Core has enabled Cadence Design Systems to simplify their administration requirements. One system administrator spends just a quarter of their time supporting 400 users. Helix Core makes it easy to upgrade, easy to back up, and easy to move files around.

There’s nothing complicated about Cindi’s final advice: “Keep your environment simple, and use the technology and the power that Perforce has to offer. It will provide a robust SCM solution for you and anyone else that decides to use it.”

Missed the live presentation? Watch the on-demand webinar.

Helix TeamHub 2017.2 Helps Global Teams Scale Git for Enterprise DevOps

Anonymous (not verified) November 10, 2017
Version Control

We are very excited to announce the release of Helix TeamHub 2017.2! Helix TeamHub 2017.2 brings more visibility to the end user as well as improvements under the hood. Let’s take a look at some of the latest enhancements. For a more detailed overview of the changes, please review the release notes.

Expandable Diffs

One of the most awaited features in Helix TeamHub 2017.2 is the ability to expand the context around diffs. Expandable diffs are a time saver because you no longer need to leave Helix TeamHub to do a complete code review.

When there’s a change that touches a function or a class, Helix TeamHub used to show only a certain number of lines before and after the changes in the diffs. However, for changes that touch a larger subset of the codebase, you typically need to see the whole class definition or a function in order to review the changes. This is now possible with expandable diffs.

Clicking the up or down caret expands your view of the code above or below, in case you need more context to review the change.

You can expand the diffs as far as you need, from the beginning of the file to the end, using the buttons before and after the changes.

Tweak Diff Limits (Enterprise Only)

When you are working with extremely large changesets, Helix TeamHub Enterprise now lets you configure diff limits based on several properties. The default limits have been set to offer a smooth user experience, but that naturally comes at the cost of not being able to show very large changesets. In Helix TeamHub Enterprise 2017.2, you can configure fine-grained limits that determine when diffs are hidden.

The configuration of the diff limits happens through the “hth.json” configuration file. You can configure limits in several ways, as the sketch after this list illustrates:

  • Maximum number of files
  • Maximum number of lines
  • Maximum size of a file in a diff
  • Timeout for generating the diff
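
As an illustration only, such a configuration might take a shape like the following; the key names are hypothetical, not the documented hth.json schema:

{
  "_note": "hypothetical sketch, not the documented hth.json schema",
  "diffs": {
    "max_files": 300,
    "max_lines": 10000,
    "max_file_size_bytes": 1048576,
    "timeout_seconds": 10
  }
}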

Tuning these limits might have performance implications, so use caution when tweaking.

Code Review Listing Improvements

Helix TeamHub 2017.2 has revised parts of the code review listing to show more meaningful and actionable information to the end user. We received great feedback from users on the code review listing in Helix TeamHub and incorporated that feedback into the latest release.

The enhanced code review listing shows the number of approvals, the build status, and the reviewers assigned to the code review. With the extra information, you can quickly distinguish what state the review is in.

It’s easier than ever for team members to know when they need to act, resulting in better productivity.

For example, if there are no reviewers, it means the review hasn’t started. If, on the other hand, the approval count has been met, you can assume that the review can be safely merged.

Also, the creator of the review is now shown by name under the name of the review, rather than as an avatar.

LargeFiles Support for Mercurial Repositories

Helix TeamHub supports the Git, Subversion (SVN), and Mercurial (HG) version control systems. Storing large binary files in Git and Mercurial repositories is painful, but both systems have alternative ways to store large binary files. In Mercurial, one of the native alternatives is LargeFiles.

Although LargeFiles is a native extension that is delivered with Mercurial, it needs to be enabled separately. Helix TeamHub 2017.2 now allows you to enable the Mercurial LargeFiles extension through the repository settings. Once the LargeFiles extension is enabled for a particular repository, it cannot be disabled.

The LargeFiles extension setting is found within the Repository Settings, under the Maintenance menu.

Changes in Supported Operating Systems (Enterprise Only)

Helix TeamHub 2017.2 adds support for Ubuntu 16.04 LTS and Debian 8 and 9. Helix TeamHub 2017.2 no longer supports Ubuntu 12 or Debian 6 or 7.

Smaller Enhancements and Bug Fixes

In addition to bug fixes, which can be found in the Release Notes, Helix TeamHub 2017.2 includes a few minor enhancements.

You can now configure the webhook content type via the dropdown menu instead of a text field. Also, the layout selection for listing issues is now preserved in browser localStorage. Lastly, you can create relative links to the Helix TeamHub instance in question using Markdown syntax.

Try Helix TeamHub for Free

Try the enhancements yourself by signing up for the free cloud version. If you want to see the new enhancements up close, register for the next live demo.


Culture vs. Tools: Which Is the Key to DevOps at Scale?

jbartlett@perf… November 13, 2017
DevOps

Some will tell you that DevOps is 100 percent about transforming the culture of your organization, while others extol the virtues of fantastic new tools that make DevOps possible.

Which of these is true? Of course, culture is important; if you don’t break down silos and create communication between teams, you will never achieve the benefits that DevOps promises.

But, the same can be said about the tools; without adopting new tools, you won’t be able to achieve automation, continuous integration, and continuous delivery.

According to recent IT industry research, 89 percent of consumers who experience poor service with a brand will leave for a competitor. This is one example of how today’s enterprise is under pressure to digitally transform both customer-facing applications and the foundation on which they are built.

Software Is Driving Product Development, New Revenue Streams, and Customer Reach

In order for companies to elevate their performance and achieve operational goals, software technologies and solutions must take center stage, said Creative Intellect Consulting (CIC) analyst Bola Rotibi. “Ask organizations today about their goals for progression and most will outline a journey in which software technologies and solutions drive product and process innovations and empower their workforce,” said Rotibi. “Success in the digital age is predicated on an ability to deliver software at scale.”

For your company to achieve such performance, in both development and operations, it must incorporate changes in leadership, tools, automation, and culture.

DevOps has been recognized by many as the path to solving these problems, but for an organization just starting out, it can seem overwhelming. The processes that make up the five continuous stages (development, testing, integration, delivery, and deployment) have been proven to enable greater automation. But in a large organization, you’ve got to achieve a balance between a best-of-breed approach and a platform solution that offers consistency, management governance, and orchestration.

Expert Advice for Scaling DevOps to the Enterprise

That’s why I was so excited to read CIC’s analyst brief, "Scaling DevOps in the Enterprise: A 10-Point Primer", by Ian Murphy and Bola Rotibi.

It’s a quick read and encapsulates the overarching topics for the enterprise to consider, giving equal time to cultural and technological issues. From dealing with communication challenges to continuously analyzing your existing tools and skillsets in which you’ve invested, CIC takes the perplexity out of scaling DevOps.

The analyst brief also contains a concise, 10-point primer that you can use as you embark on your journey to create your enterprise software factory.

For expert advice on achieving DevOps at scale, read "Scaling DevOps in the Enterprise: A 10-Point Primer".


Benchmarking Multi-Repo Git Environments for Performance

cberres November 15, 2017
Git at Scale

Performance is a key factor when deciding whether you should adopt a new tool.

To help you understand how different versioning operations can scale, we measured performance in a large, multi-repository Git environment on a standalone Git server and the same environment in Helix TeamHub Enterprise, the new Git code hosting and collaboration solution by Perforce.

The Test: Concurrent Git Shallow Clones

We tested 10 concurrent shallow clones of Git repositories on a standalone Git server and compared that to 10 concurrent shallow clones of the same repositories in Helix TeamHub Enterprise. Instead of using Git commands, the clones from Helix TeamHub Enterprise used the native Perforce command p4 sync. Both tests fetched data from 1,011 Git repositories.
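
Per repository, the two sides of the test looked roughly like the following sketch; the URL and depot path are placeholders, and the mapping of repos into the depot is an assumption for illustration:

$ git clone --depth 1 https://git.example.com/repo-0001.git   # shallow clone from a standalone Git server
$ p4 sync //git/repo-0001/...                                 # equivalent fetch from Helix TeamHub Enterprise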

The Results: Helix TeamHub Enterprise Fetches 3x Faster

                    Helix TeamHub Enterprise    Standalone Git Servers
Shallow Clone       1 hr 20 mins                4 hrs 2 mins

The results reveal that Helix TeamHub Enterprise improves fetching performance in large multi-repo Git projects by more than three times. For large teams with these environments, working on thousands of changes a day, this represents an unparalleled increase in productivity and scalable CI without additional tooling or complexity. Helix TeamHub Enterprise also:

  • Eliminates the need to use disparate, third-party tools to connect siloed, multi-repo Git projects.
  • Provides build farm support with automated global replication to scale continuous integration (CI) and reduce load on the master server.
  • Gets users the data they need to perform builds without the full history of Git data.

Over the next year, Perforce will continue to drive significant advancements for teams using Git, enabling them to speed up builds and CI, as well as development and code review on large projects spanning multiple repositories.