
Announcing General Availability for Code Search


Today, we are excited to announce the general availability of Code Search in Visual Studio Team Services. Code Search is available for Team Foundation Server “15” as well.

What’s more? Code Search can be added to any Team Services account for free. By installing this extension through the Visual Studio Marketplace, any user with access to source code can take advantage of Code Search.

With this release, Code Search now understands Java. Beyond full-text matching, for C#, C, C++, VB.NET, and Java it understands the structure of your code, letting you search for specific context, like class definitions, comments, properties, etc. across all your TFVC and Git projects. We’ll be adding support for additional languages in the future.

Enabling Code Search for your VSTS account

Code Search is available as a free extension on the Visual Studio Team Services Marketplace. Click the Install button on the extension description page and follow the instructions displayed to enable the feature for your account.

Note that you need to be an account admin to install the feature. If you are not, the install experience will allow you to request that your account admin install it.

Installation of the extension triggers indexing of the source code in your account. Depending on the size of the code base, it may take some time for the index to be built.

You can start searching for code using the search box on the top right corner or use the context menu from the code explorer.

[Screenshot: search box]

Enabling Code Search on Team Foundation Server “15”

Code Search is available for Team Foundation Server starting with TFS “15”. You can configure Code Search as part of the TFS Server configuration. For more details see Administer Search.

Note that you need to be a TFS admin to configure Search as part of TFS.

Installation of the Code Search extension triggers indexing of the source code in a collection. Installation can be initiated for all collections by a TFS admin during configuration of the Search feature, or post-configuration by Project Collection admins for their respective collections. The latter can be achieved by navigating to the Marketplace from within your TFS instance. Depending on the size of the code base, it may take some time for the index to be built.

[Screenshot: installing Code Search on TFS]

Search across one or more projects

Code Search enables you to search across all projects (TFVC & Git), so you can focus on the results that matter most to you.

[Screenshot: searching across multiple projects]

Semantic ranking

Ranking ensures you get what you are looking for in the first few results. Code Search uses code semantics as one of the many signals for ranking, ensuring that matches are presented in order of relevance. For example, files where the term appears as a definition are ranked higher.

[Screenshot: semantic ranking of results]

Rich filtering

Code Search lets you filter your results by file path, extension, repo, and project. You can also filter by code type, such as definition, comment, reference, and much more. And, by incorporating logical operators such as AND, OR, and NOT, you can refine your query to get the results you want.
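For example, queries combining these filters and operators look like the following (the filter names come from the Code Search filter set; the specific terms, projects, and repos are illustrative):

```
QueueJob ext:cs AND proj:Fabrikam     find the term in .cs files within the Fabrikam project
def:QueueJob NOT path:test            definitions of QueueJob, excluding paths containing "test"
comment:todo AND repo:WebApp          "todo" appearing in comments in the WebApp repo
```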

[Screenshot: filtering search results]

Code collaboration

Share Code Search results with team members using the query URL. Use annotations to figure out who last changed a line of code.

[Screenshot: sharing search results]

Rich integration with version control

The Code Search interface integrates with familiar controls in the Code Hub, giving you the ability to look up History, compare what’s changed since the last commit or changeset, and much more.

[Screenshot: version control integration]

Refer to the help documentation for more details.

Got feedback?

How can we make Code Search better for you? Here is how you can get in touch with us:

 

Thanks,
Search team


Parallel Test Execution

$
0
0

An earlier post on Parallel Test Execution drew attention to its subtle semantics. Three considerations directly shaped them: (1) reach, (2) composability, and (3) non-disruptive rollout.

The Visual Studio Test Platform is open and extensible, with tests written using various test frameworks and run using a variety of adapters. To reduce on-boarding friction, the feature ought to work on existing test code. It especially needs to work on existing MSTest-based test code: there is a huge corpus of such tests already written, and it would be unrealistic to expect users to update their test code just to take advantage of the feature. The feature must acknowledge that adapters for frameworks like NUnit and xUnit.net already enable parallel test execution. And, importantly, it must not break existing test runs. Accordingly, the feature took shape as follows:

  • Parallel Test Execution is available to all frameworks.
  • It is available from within the IDE, the command line (CLI), and in VSTS.
  • From within the IDE it is available from all launch points (Test Explorer, CodeLens, various “Run” commands, etc.).
  • The feature composes with test adapters/frameworks that already support parallel execution.
  • The feature has a low-friction on-boarding experience – one that requires no changes to existing test code.
  • The feature meets the test code where it is.
  • The feature is OFF by default – users have to explicitly opt-in.

The Solution at Visual Studio Update 1

Parallel test execution leverages the available cores on the machine. It is realized by launching test execution on each available core as a distinct process, and handing each process a container of tests (an assembly, DLL, or other artifact containing the tests) to execute. The unit of isolation is a process. The unit of scheduling is a test container. Within each container, the tests are executed as per the semantics of the test framework. If there are many such containers, then as processes finish executing the tests within a container, they are handed the next available container.

In effect, a coarse-grained level of parallelism is supported. The test platform intentionally leaves fine-grained control over parallelism to the framework of choice, i.e. it composes over it. Both levels of parallelism can co-exist.

The feature is OFF by default.

The .runsettings file is the artifact used to configure how the tests get run, and is common in the IDE, the CLI, and the VSTS workflows. The feature can be turned ON by authoring a .runsettings file with an entry for MaxCpuCount, and associating that with the test run.
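For example, a minimal .runsettings file that turns the feature on (with 0 meaning "use all available cores") looks like this:

```xml
<RunSettings>
  <RunConfiguration>
    <!-- MaxCpuCount controls process-level parallelism -->
    <MaxCpuCount>0</MaxCpuCount>
  </RunConfiguration>
</RunSettings>
```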

[Screenshot: MaxCpuCount entry in a .runsettings file]
The value for MaxCpuCount has the following semantics:

  • ‘n’ (where 1 <= n <= number of cores): up to ‘n’ processes will be launched.
  • ‘n’ of any other value: as many processes as there are available cores will be launched. Typically, a value of 0 is used to indicate that all available cores may be used.

Guidance for leveraging Parallel Test Execution

As mentioned earlier, not all existing test code is necessarily written in a parallel-safe manner. For pure unit tests, the feature should just work. For other kinds of tests, you will need to experiment a little to see if they assume exclusive use of global resources, and refactor/rearrange them appropriately. In general, the following iterative approach may be used to leverage the feature:

Partition tests in terms of a taxonomy as follows:
(1) “Pure Unit Tests” (can run in parallel)
(2) Functional tests that can run in parallel with some modifications (e.g. two functional tests trying to create/delete the same folder can be ‘fixed’ to remove the assumption that they have exclusive use of the folder).
(3) Functional tests that cannot be modified to run in parallel (e.g. two Coded UI tests doing mouse actions on the desktop, or two functional tests writing to the Bluetooth/IR port).

Gradually evolve the partitioning as follows:
a) Run the tests in parallel, see which tests fail, and classify them into (2) or (3) above.
b) For tests in (2), fix them so that they can run in parallel.
c) For tests in (3), move them out into a separate test run (where parallel is OFF by default).
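As a sketch of the kind of fix described in (2), assume two tests that both hard-code the same working folder; giving each test its own unique directory removes the shared-resource assumption (the class and method names here are hypothetical, not part of any test framework):

```java
import java.nio.file.Files;
import java.nio.file.Path;

public class ParallelSafeTests {
    // Before: both tests used a fixed folder such as "C:\\temp\\testdata",
    // so they could not safely run in parallel. After: each test creates its
    // own unique working directory, removing the exclusive-use assumption.
    static Path isolatedWorkDir() throws Exception {
        return Files.createTempDirectory("testdata-");
    }

    public static void main(String[] args) throws Exception {
        Path first = isolatedWorkDir();   // working area for "test 1"
        Path second = isolatedWorkDir();  // working area for "test 2"
        // The two tests no longer contend for the same folder.
        System.out.println(!first.equals(second)
                && Files.isDirectory(first)
                && Files.isDirectory(second));
    }
}
```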

Limitations

Parallel Test Execution is not supported in the following cases:
(1) If the test run is configured using a .testsettings file.
(2) If the test code targets the Phone, Store, or UWP application platforms.

How the feature evolved at Visual Studio Update 2

We received feedback that enabling parallel test execution using .runsettings was not very discoverable. Also, some tests might fail when run in parallel, and our guidance on leveraging the feature required users to separate out such tests iteratively, an exercise that requires switching between parallel and non-parallel execution multiple times during one or more sessions. Therefore, we evolved the feature along the following lines:

  • Make the feature more easily discoverable.
  • Make it easy to turn the feature ON/OFF.

Starting with Visual Studio 2015 Update 2, we added the following alternative ways to enable Parallel Test Execution in the IDE, CLI and VSTS:

IDE
Parallel Test Execution is surfaced as a button on the Test Explorer.

[Screenshot: parallel toggle in Test Explorer]
This is an ON/OFF setting.
If ON, the value of MaxCpuCount is set to 0.
The value is persisted with the solution.
The value is merged with any .runsettings file (if one is associated) just before a run begins.
The only way to tweak the value of MaxCpuCount is by explicitly adding a .runsettings file.

VSTS
In VSTS, “Execute Tests in Parallel” is surfaced as a checkbox in the VSTest task.

[Screenshot: “Execute Tests in Parallel” checkbox in the VSTest task]
If ON, the value of MaxCpuCount is set to 0.
The value is merged with any .runsettings file (if one is associated) just before a run begins.
The only way to tweak the value of MaxCpuCount is by explicitly adding a .runsettings file.

CLI
vstest.console.exe supports a /Parallel command line switch.
If the switch is specified, the value of MaxCpuCount is set to 0.
The value is merged with any .runsettings file (if one is associated) just before a run begins.
The only way to tweak the value of MaxCpuCount is by explicitly adding a .runsettings file.
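Putting the CLI pieces together, a typical invocation looks like the following (the assembly and settings file names are illustrative):

```
vstest.console.exe MyTests.dll /Parallel
vstest.console.exe MyTests.dll /Parallel /Settings:my.runsettings
```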

Merging with .runsettings

But what if you already have a .runsettings file with MaxCpuCount? The following tables summarize how the settings will be merged:

IDE
[Table: merge behavior in the IDE]

CLI
[Table: merge behavior on the CLI]

VSTS
[Table: merge behavior in VSTS]

Summary

Parallel Test Execution is one of the key features we have shipped as part of the “Efficient Execution” theme, and we hope that this post has shed some light on the considerations that shaped it.

We are eager to know more about your experience with using this feature. Your input has informed the evolution of this and other features in the Test Platform, so please keep the feedback coming: post comments on this post, use Visual Studio’s “Report a Problem”/“Provide a Suggestion” feature, or reach us via Connect or Twitter.

Looking forward to hearing from you.

Team Services October Extensions Roundup – Rugged DevOps

$
0
0

This month the focus is on making your DevOps environment rugged. According to Puppet, teams leveraging DevOps deploy 200x more frequently and leverage 90% more OSS components. Many of these teams, however, have not integrated security into their processes. The teams who have spend 50% less time fixing security issues later. With this roundup we’ll look at three extensions that add support for OSS security and license validation, as well as code scanning, to ‘shift left’ your security and help you spend less time building more secure software.

WhiteSource

See it in the Marketplace: https://marketplace.visualstudio.com/items?itemName=whitesource.whitesource

If your project leverages OSS, then you need to consider using WhiteSource. This extension adds a build task that enables critical OSS management scenarios once connected with your WhiteSource account.

  • Secure Your Open Source Usage – automatically detect the OSS components and dependencies used in your project via the WhiteSource build task and your repositories, without needing to scan your code manually

[Screenshot: WhiteSource risk report]

  • Get Real-Time Alerts on Security Vulnerabilities – get alerted whenever a component with a known security vulnerability is added to your project, or when new vulnerabilities are found in components you’re already using. You can also set up alerts for component licenses based on your pre-defined policies, security fixes, and component versioning

[Screenshot: security vulnerability alerts]

  • Automate Your Open Source Approval Process – using pre-defined policies with your WhiteSource account, you can automate the approval or rejection of newly added OSS components based on licenses, vulnerabilities, severe software bugs, quantity of newer versions, and more

[Screenshot: WhiteSource policies]

 

HPE Security Fortify VSTS extension

See it in the Marketplace: https://marketplace.visualstudio.com/items?itemName=fortifyvsts.hpe-security-fortify-vsts

This extension adds four build tasks and enables you to leverage HPE Fortify’s two major offerings: Fortify SCA and Fortify on Demand.

[Screenshot: adding Fortify build tasks]

HPE Fortify’s SCA provides security source code analysis using a multitude of security coding rules and guidelines for a broad set of programming languages. Two build tasks added by this extension enable Fortify SCA:

  • Fortify Static Code Analyzer Installation – Run this task to automatically install the Fortify SCA software on your build agent. You provide the Fortify license file, and the task installs SCA unless it is already present. It also configures SCA with the Fortify rule packs your license entitles you to.
  • Fortify Static Code Analyzer Assessment – This task actually runs the SCA as a build step, leverages all the proper parameters, and can output the results of your scan as build artifacts.

Fortify on Demand delivers security as a service. It consists of a static scan that is audited by their team of experts, or a dynamic scan that mimics real-world hacking techniques and attacks, using both automated and manual techniques to provide comprehensive analysis of complex web applications and services. Two build tasks are included in this extension:

  • Fortify on Demand Static Assessment – this requests a static assessment as a build step and performs the necessary upload to the Fortify on Demand service. You can be notified based on your own preferences, and your results will be available in your Fortify portal.
  • Fortify on Demand Dynamic Assessment – this requests a dynamic scan as a build step. Before using this task you’ll need to configure your dynamic scan settings in your Fortify on Demand portal. At the portal, you’ll configure the URL where your application is being deployed and hosted.

Checkmarx

See it in the Marketplace: https://marketplace.visualstudio.com/items?itemName=checkmarx.cxsast

If security at speed is what you’re looking for, give Checkmarx a look. This extension offers static source code analysis, and what separates it from the competition are features like incremental scanning and best fix location. The ability to scan only new or modified code keeps your build process fast, while still giving you peace of mind that you will find security flaws before they become problems. Best fix location even goes as far as to highlight where you should fix your code.

To use the build task, you just need to configure a service endpoint with your Checkmarx account.

[Screenshot: Checkmarx scan results]

    Are you using an extension you think should be featured here?

    I’ll be on the lookout for extensions to feature in the future, so if you’d like to see yours (or someone else’s) here, then let me know on Twitter!

    @JoeB_in_NC

    Test & Feedback – Capture your findings

    $
    0
    0

The Test & Feedback extension allows everyone on the team, be it developers, testers, product owners, user experience specialists, or leads/managers, to contribute to the quality of the application, making quality a “team sport”. It enables you to perform exploratory tests or drive your bug bashes without requiring predefined test cases or test steps. The extension simplifies exploratory testing into three easy steps: capture, create, and collaborate. An overview is available in the overview blog of the Test & Feedback extension.

    In this blog, we will drill into the “Capture” aspect. There are two ways in which this extension captures the data:

1. Explicit capture – with “explicit” capture, you take deliberate actions, all exposed on the extension toolbar. We cover Capture Screenshot, Capture Notes, and Capture Screen Recording in detail below.
  [Screenshot: Test & Feedback toolbar]
2. Implicit capture – with “implicit” capture, you are not required to do anything special. The actions you trigger automatically capture the required data, with basic annotations added. These capture sources include the image action log, page load data, and system information.

    Capture Screenshot

As you explore the web app, you can capture the entire screen or part of it as a screenshot. Click [Test & Feedback - Capture Screenshot] to trigger screen capture, then take a “Fullscreen” capture or select part of the web page as required. Once the area is selected, you can annotate the captured screenshot.

[Screenshot: annotated screenshot]

You can name the screenshot as you like, and use shapes from the annotation toolbar to draw on the cropped image area to highlight parts of the page. The annotation toolbar provides freehand drawing, circles/ovals, rectangles, arrows, and text annotations. It also provides a way to “blur” parts of the image that contain confidential or sensitive information. You can customize the color of all shapes. Save the screenshot by clicking [Test & Feedback - Save screenshot]. Saving a screenshot automatically adds it to the session, and it shows up on the session timeline.

Note: Floating elements such as tooltips and dynamic UI components that appear on mouse hover and disappear when the mouse moves away are not captured by the “screenshot” option. Use the “screen recording” option described below instead.

    Capture Notes

You can take notes as you explore your web app. Click [Test & Feedback - Capture notes] to open the notes area, where notes can be added and saved. You can even paste text from your clipboard into the notes area. Notes are saved automatically and persist even if the browser window or the extension pop-up closes. “Save” the note to add it to the ongoing session, where it appears on the session timeline.

[Screenshot: capturing notes]

    Screen Recording

Screen recording allows you to capture continuous activity such as web-page navigation. Only video is captured; audio is not supported. It also addresses scenarios that require capturing more events than the image action log can, or capturing floating elements such as tooltips. Screen recording can record all desktop (non-browser) applications as well, which is extremely useful if you are testing a desktop app but still use the extension to report issues.
Click [Test & Feedback - Record Screen] to start and stop the recording.

1. Start recording
  [Screenshot: start screen recording]
2. Select the screen or application to record
  [Screenshot: select screen to record]
3. An ongoing screen recording status indicator will appear
4. Stop the recording when done
  [Screenshot: stop screen recording]

    Capture Image action log

As you navigate the web app, all your mouse clicks, keyboard typing events, and touch gestures are captured automatically in the form of an “image action log”, giving you the context of the repro steps or actions that led to a specific part of the web app. The image action log tracks the last 15 events in the context of the ongoing session. Information about the captured events is made available during bug, task, and test case creation. This makes the steps that led to the bug available with just one click at the time of filing it.

[Screenshot: image action log while filing a bug]

A check-box option allows you to include or exclude image action log data during bug filing or task creation. Image action log capture is turned on when the extension is installed, and the extension “Options” page lets you configure it.

In the work item form, all image action log images are shown in compact form, but a full-resolution image is also added to the bug or task to provide complete context. These images are accessible via quick links added in the bug repro steps or task description.

[Screenshot: image action log view in work item]

They can also be viewed by clicking the attachments.
[Screenshot: image action attachments in work item]

    Capture Page Load Data

Just as the “image action log” captures, in the form of images, the actions you perform on the web app being explored, the “page load” functionality automatically captures details of how long a web page took to complete its load operation. Instead of relying on a subjective perception of web page slowness, you can now objectively quantify the slowness in the bug. Page load data provides a high-level snapshot while filing the bug, and a more detailed drilldown with timeline graphs at the navigation and resource level in the filed bug/task.

[Screenshot: page load data]

The snapshot provides high-level information on where the most time was spent loading the page. A detailed report comprising a navigation chart and a resource chart is attached to the bug. A developer will find this information very useful for starting deeper investigations into the web app’s performance issues.

[Screenshot: page load data, developer view]

The bug and task forms include an option to exclude page load data when it is not needed. See the extension “Options” page to enable or disable this capture across all sessions.

    System Information

With every bug, task, and test case filed, “system information” about the browser and machine is added. It captures browser, OS, memory, and display information. This helps the developer know the machine configuration, display properties, and OS info when debugging issues. This additional diagnostic information is always sent and cannot be turned off.

[Screenshot: system information]

    Options

Settings are exposed for all of the above capture sources so you can enable or disable them at the extension level across all sessions.

[Screenshot: extension options]

Now that you are familiar with all the capture options in the Test & Feedback extension, next we will explore how the captured data can be used in the various “Create” [coming soon] options to create artifacts like bugs, tasks, and test cases.

    Test & Feedback extension – Create artifacts

    $
    0
    0

    In the previous blog “Test & Feedback – Capture your findings“, we discussed the full “Capture” capability of the Test & Feedback extension. Once all the findings have been captured, the next step is to create rich actionable work items that can be consumed by the team. In this blog we will focus on the “Create” step and the various artifacts that are supported by the extension. As you explore the web application, depending on the requirement, a host of work items can be created using the extension – you can report issues by creating bugs or tasks, respond to feedback requests by creating feedback response work items, or create test cases for important scenarios to be tested. The extension displays simple forms that are automatically populated with all the “captures” collected – screenshots, screen recordings, image action log, and page load times, thus allowing for the quick creation of work items.

    As discussed in the “Overview” blog, the Test & Feedback extension has two modes: Standalone and Connected.

    Standalone mode: In the standalone mode, everyone can capture findings using screenshots and notes, and report issues by creating a bug. The captures collected are automatically added as a part of the bug form. All the captured findings and created work items are stored locally, and can be shared with the team in the form of a report.
[Screenshot: creating a bug in standalone mode]
    Note: Standalone mode is open to all. It doesn’t require a Visual Studio Team Services or Team Foundation Server connection.

    Connected mode: To use the extension in connected mode, you must connect to your Visual Studio Team Services account or Team Foundation Server. This mode supports creation of the full range of work items – bugs, tasks, feedback response work items, and test cases. They are created in the team project selected at the time of connection and are available for the team to consume.

Users with basic access can create bugs, tasks, and test cases

Users with stakeholder access can create feedback responses, bugs and tasks

    Let’s go through the details for each type of work item.

    Create Bug

Teams can use the Test & Feedback extension to file rich, actionable bugs during their bug bashes or while exploring the product. Choosing the “Create bug” option opens a simple bug form that contains only two fields: Title and Repro steps. Repro steps is an editable section that is auto-populated with all the collected captures in chronological order: the image action log along with the associated screenshots, videos, notes, and page load data. A check-box in the form lets you easily include or exclude the entire image action log and page load data from the bug form. Additionally, you can choose to delete individual actions or page load tiles, and add further notes in the Repro steps box. Once an appropriate title is provided, you can save the bug form. This automatically creates a bug work item in the team project you are connected to.

[Screenshot: bug form]

It is a common scenario, especially during bug bashes, for multiple users to come across the same issue and file the same bug, which can result in a lot of duplicate work items in the system. The extension lets you view similar bugs at the time of filing to help avoid this duplication. As you start typing the title of the bug, the extension searches for similar bugs in the background (based on keywords in the title) and shows the number of similar bugs the team has already created. Based on the results, you can decide whether the issue you found is new or existing.

[Screenshot: similar bugs search]

You can choose to edit an existing bug, in which case all the collected data is appended to the existing bug, or continue creating a new one.

    Create Task

    For in-flight code within a sprint, some teams prefer to create tasks rather than bugs. Choosing “Create task” opens the simple task form, which has two fields – Title and Description. In a similar way to the bug form, the description box is auto-populated with the captures and you can choose to include or exclude data in line with your requirements. On saving the task, the extension creates a new task work item in the team project you are connected to. It becomes a part of the task board, and can be tracked by the team during their standups.

[Screenshot: task form]

    Create Test Case

Simultaneous test design and test execution is the basis of true exploratory testing. For important scenarios, or for bugs found that need to be validated later, teams typically need to create test cases. The extension makes creating test cases easy for such scenarios, based on the user actions (the image action log) generated during testing.

    Choosing “Create test case” opens a simple test case form with a Title field and a Test steps field. The test case form is auto-populated with test steps based on the user actions while exploring the scenario. It also attaches an image of the action with each of the steps. Each test step has three sections: Step attachment, Action, and Expected Result. Action and Expected Result are editable sections where you can add or remove information as required. You can ignore some steps that are not required by unchecking the checkbox provided for each step. Once satisfied with the contents, you can save the test case in the team project you are connected to.

[Screenshot: test case form]

Note: Only the most recent actions, up to a maximum of 15, will be captured as steps.

    Create Feedback Response

    Product teams often want to solicit feedback from stakeholders on the features or user stories the team is working on prior to release. Only stakeholders connected to Team Services or Team Foundation Server “15” can create work items to respond to the feedback requests received. In addition to bugs and tasks, stakeholders can choose to respond to feedback requests using a feedback response work item. Similar to bugs and tasks, the feedback response work item has a title and free-form editable description field that is auto-populated with all the collected captures in chronological order. The feedback form also has a rating section where stakeholders can provide a rating from 1 to 5 for the scenario or feature for which feedback is being provided.

[Screenshot: feedback response form]

    Traceability

One of the key functions that the Test & Feedback extension provides is the “Explore work item” capability. This enables end-to-end traceability between any work item you file (such as a bug, task, or test case) during your exploratory testing and the explored work item. If you are responding to a feedback request, all the work items created are automatically linked to the feedback request as well as to the user story or feature on which the feedback was requested. Traceability enables easy tracking and triaging of all issues filed by the entire team for a particular user story or feature.

    Timeline

    The Timeline provides a list of all your activities within the session. All the information captured by you – screenshots, notes, screen recordings, work items created, and work items explored are shown in reverse chronological order. You can view these captures, and open work items by selecting the entry in the Timeline.

[Screenshot: session timeline]

Now that you are familiar with how to capture your findings and create work items using the Test & Feedback extension, in our next blog we will explore the various “Collaborate” [coming soon] options that are available for teams.

    Maven and Gradle build tasks support powerful code analysis tools

    $
    0
    0

    Over the last few months we have been steadily building up the capabilities of the Maven and Gradle build tasks to offer insights into code quality through popular code analysis tools. We are pleased to announce additional much-requested features that we are bringing to these tasks, which will make it easier to understand and control technical debt.

[Screenshot: Maven code analysis fields]

    Continuous Integration builds: SonarQube integration feature parity with MSBuild

Back in July, our Managing Technical Debt planning update for 2016 Q3 announced a plan to support SonarQube analysis in Java to a level equivalent to our strong integration for MSBuild. This is well underway and nearing completion: both Maven and Gradle can now perform SonarQube analysis when you select a checkbox in the build definition. This creates a build summary of the issues that are detected.

    We also added the option to break a build when SonarQube quality gates fail. This gives instant feedback and helps you stop the technical debt leak. Finally, there is a new build summary that provides detailed information from SonarQube on why the quality gate failed so that it is easy to identify problems. You can then drill-down and get even more data by navigating to the SonarQube server through the link provided.

    SonarQube Build Breaker

    Broader support for Java-based static analysis tools

    We understand that in the past we lacked integration features for some widely used standalone code analysis tools. We have heard your feedback and have added support for three such tools: PMD, Checkstyle, and FindBugs. You can enable them simply and quickly through a checkbox in the “Code Analysis” section of your build configuration, and they will run on any agent, whether from the Hosted agent pool or on a dedicated agent of your choice (Windows, Linux, or Mac!).

    Code Analysis Report

    Towards Full Parity Java/MSBuild: Pull Request with Code Analysis for Java

    For some time we have supported showing you code analysis issues directly on pull requests in Visual Studio Team Services for projects using MSBuild. We hope to support this for Maven and Gradle builds too in future.

    Limitations, Feedback, and Troubleshooting

    If you are working on-premises with TFS 2016, FindBugs support for Gradle will not ship at RTM but will be added in Update 1. For users on Visual Studio Team Services, most of these features are already live and waiting for you, with the rest due to roll out as part of Sprint 107 in the next few weeks.

    As always, we would love to hear from you. Please raise issues and suggestions on the issues tab of the vsts-tasks repository in GitHub: https://github.com/microsoft/vsts-tasks/issues and add the label “Area: Analysis”.

    UML Designers have been removed; Layer Designer now supports live architectural analysis


    We are removing the UML designers from Visual Studio “15” Enterprise. Removing a feature is always a hard decision, but we want to ensure that our resources are invested in features that deliver the most customer value.  Our reasons are twofold:

    1. On examining telemetry data, we found that the designers were being used by very few customers, and this was confirmed when we consulted with our sales and technical support teams.
    2. We were also faced with investing significant engineering resource to react to changes happening in the Visual Studio core for this release.

    If you are a significant user of the UML designers, you can continue to use Visual Studio 2015 or earlier versions, whilst you decide on an alternative tool for your UML needs.
     
    However, we continue to support visualizing the architecture of .NET and C++ code through code maps, and for this release we have made some significant improvements to Layer (dependency) validation. When interviewing customers about technical debt, architectural debt, in particular unwanted dependencies, surfaces as a significant pain point. Since 2010, Visual Studio Ultimate, now Enterprise, has included the Layer Designer, which allows desired dependencies in .NET code to be specified and validated. However, validation only happens at build time, and errors only surface at the method level, not at the lines of code which actually violate the declared dependencies.

    In this release, we have rewritten layer validation to use the .NET Compiler Platform (“Roslyn”), which allows architecture validation to happen in real time, as you type, as well as on build. It also means that reported errors are treated in the user experience like any other code analysis error. Developers are therefore less likely to write code that introduces unwanted dependencies, as they are alerted in the editor as they type.

    Moving to Roslyn also makes it possible to create a plugin for SonarQube, allowing layer validation errors to be reported with other technical debt during continuous integration and code review via pull requests, using the SonarQube build tasks integrated with Visual Studio Team Services. The plugin is on our near-term backlog.
     

    If you haven’t tried the Layer Designer before, we encourage you to give it a try. More detail on how to use it is available in Live architecture dependency validation in Visual Studio ’15’ Preview 5. And please provide feedback not only on the experience, but also on other rules you would like to see implemented.

    Code Search is now Java friendly


    In addition to C#, C, C++, and Visual Basic code, you can now do semantic searches across Java code. Adding to our Java feature set and capabilities, we recently enabled contextual search for Java files in the Code Search extension for Visual Studio Team Services and Team Foundation Server starting with TFS “15”. You can apply code type filters to search for specific kinds of Java code such as definitions, references, functions, comments, strings, namespaces, and more.

    Semantic search for Java enables Code Search to provide more relevant search results. For instance, a file with a match in a definition is ranked above a file with a match in a method reference. Similarly, matches in comments are ranked lower than references, and so on.

    Code Search - Ranking Results

    You can use Code Search to narrow down your results to exact code type matches. Navigate quickly to a method definition to understand its implementation simply by applying the definition filter, or scope the search to references in order to view calls and maximize code reuse. You can filter your search to basetype instances to locate a list of derived classes or scope a search to interface instances.

    As you type in the search box, select functions and keywords from the drop-down list to quickly create your query. Use the Show more link to display all the available functions and keywords. Mix and match the functions as required.

    Code Search - Filter Helper Dropdown

    Alternatively, you can select one or a combination of filters from the list in the left column.

    Code Search - Code Type Filters

    You can type the functions and parameters directly into the search box. The following table shows the full list of functions for selecting specific types or members in your Java code.

    To find code where “term” appears as a    Search for

    argument                                  arg: term
    base type                                 basetype: term
    class definition or declaration           class: term
    class declaration                         classdecl: term
    class definition                          classdef: term
    comment                                   comment: term
    constructor                               ctor: term
    declaration                               decl: term
    definition                                def: term
    enumerator                                enum: term
    field                                     field: term
    function                                  func: term
    function declaration                      funcdecl: term
    function definition                       funcdef: term
    global                                    global: term
    header                                    header: term
    interface                                 interface: term
    method                                    method: term
    method declaration                        methoddecl: term
    method definition                         methoddef: term
    namespace                                 namespace: term
    reference                                 ref: term
    string literal                            strlit: term
    type                                      type: term
    typedef                                   typedef: term
    union                                     union: term
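As a rough illustration of how these filters compose, the sketch below builds query strings from a few of the code type filters in the table. This is not an official client; the CODE_TYPE_FILTERS mapping and build_query helper are hypothetical names used only to mirror the syntax above.

```python
# Hypothetical helper composing Code Search query strings from a subset
# of the code type filters listed in the table above. The search box in
# Team Services is the real interface; this just mirrors its syntax.
CODE_TYPE_FILTERS = {
    "argument": "arg", "basetype": "basetype", "class": "class",
    "comment": "comment", "constructor": "ctor", "definition": "def",
    "field": "field", "interface": "interface", "method": "method",
    "namespace": "namespace", "reference": "ref", "string literal": "strlit",
}

def build_query(term, *code_types):
    """Return a query such as 'def: Login', or a plain full-text term."""
    if not code_types:
        return term  # no filter: plain full-text search
    return " ".join(f"{CODE_TYPE_FILTERS[ct]}: {term}" for ct in code_types)

print(build_query("OrderProcessor", "definition"))  # def: OrderProcessor
```

Mixing filters, as the drop-down in the search box allows, is just a matter of passing several code types.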

    Code Search is available as a free extension on Visual Studio Team Services Marketplace and Team Foundation Server starting with TFS “15”. Click the install button on the extension description page and follow instructions to enable Code Search for your account. For installation on TFS see Administer Search.

    You can learn more about the Java integration within Visual Studio Team Services at java.visualstudio.com.

    Thanks,
    Search team


    Test & Feedback – Collaborate with your team


    In the previous blogs, we went through the first two steps – Capture your findings and Create artifacts. In this blog, we will take you through the third step: Collaborate. The Test & Feedback extension provides many ways for teams to collaborate with one another to drive quality. You can use the extension to share your findings in the form of a simple session report, or to gather additional feedback where necessary. Additionally, you can connect to your Visual Studio Team Services account or Team Foundation Server “15” to view all completed sessions in one place and measure the effectiveness of your bug bashes and exploratory testing sessions using the rich insights provided. These collaboration techniques are available to users based on their access levels and the mode in which the extension is used.

    Collaborate using Standalone mode

    As described in the Overview blog, one of the modes supported by the extension is the Standalone mode. No connection to Visual Studio Team Services or Team Foundation Server is needed to use the extension in this mode. As you explore the application, you can capture your findings and create bugs offline. All the captured findings – screenshots, notes, and bugs created – are stored locally. While using standalone mode, you can use the session report feature to share your captured findings and reported issues with the rest of the team.

    Session Report

    The session report is generated either on demand, using the “Export” capability, or automatically at the end of the session. This HTML report can then be easily shared with others as a mail attachment, via OneNote or SharePoint, or in any other way as appropriate. The session report consists of two parts:

    1. Summary of bugs filed
      The first part of the session report provides a list of all the bugs filed while testing along with the details of screenshots and notes that were captured as a part of these bugs.
    2. Session attachments
      This part of the report contains, in chronological order, the screenshots and notes captured while testing the application. If you don’t want to file bugs and are simply capturing your findings, or if some captures (screenshots and notes) in the session are not included as part of any bug, this part of the report helps you easily keep track of them.

    export
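The two-part layout described above can be sketched as follows. This is purely an illustration of the report structure, not the extension’s actual export code; the function and field names are hypothetical.

```python
# Illustrative sketch of a two-part session report: a summary of bugs
# filed, followed by all captures in chronological order. Hypothetical
# names; the extension generates its own HTML.
def build_session_report(bugs, captures):
    rows = "".join(f"<li>{b['id']}: {b['title']}</li>" for b in bugs)
    timeline = "".join(
        f"<li>{c['time']} - {c['kind']}: {c['note']}</li>"
        for c in sorted(captures, key=lambda c: c["time"])
    )
    return (
        "<html><body>"
        "<h2>Summary of bugs filed</h2><ul>" + rows + "</ul>"
        "<h2>Session attachments</h2><ul>" + timeline + "</ul>"
        "</body></html>"
    )

report = build_session_report(
    bugs=[{"id": 101, "title": "Login button misaligned"}],
    captures=[{"time": "10:02", "kind": "screenshot", "note": "login page"},
              {"time": "10:05", "kind": "note", "note": "button overlaps text"}],
)
```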

     

    Collaborate using connected mode with stakeholder access

    The new feedback flow enabled in Visual Studio Team Services and Team Foundation Server “15” allows teams to use the web access to send feedback requests to stakeholders. Stakeholders can use the Test & Feedback extension to respond to these feedback requests. The feedback response work items (bugs, tasks, or feedback response work items) get automatically linked to the feedback request. This built-in traceability in the feedback flow allows teams to easily track, in one place, all the feedback received from different stakeholders. The stakeholders, on the other hand, can leverage the capabilities provided in the extension to manage all the different feedback requests they receive.

    Note: Feedback flow is supported only in Team Services and Team Foundation Server “15”.

    Request feedback from stakeholders on Features/User Stories

    Team members with basic access can now directly request feedback from stakeholders on the features/stories being worked on, using the “Request Feedback” option in the work item form context menu. You only need to fill out a simple feedback form, which sends individual mails to all the selected stakeholders along with the instructions provided in the form.

    RequestFeedback3

    Respond to feedback requests

    Stakeholders can easily respond to the feedback request by clicking on the “Provide feedback” link in the mail, which automatically configures the Test & Feedback extension with the selected feedback request. Stakeholders can then use the full capture capabilities of the extension to record their findings and submit their feedback in the form of feedback response, bug, or task work items.

    FeedbackResponse2

    To see the list of feedback requests assigned to you, click on [Test & Feedback - Capture Screenshot] in the extension. From the list, select the feedback request you want to provide feedback on and quickly start providing feedback. From this page, you can also manage your “Pending feedback requests” by marking them as complete or declining them, and you can switch between different types of feedback requests by clicking the desired radio button.

    feedback_request

    In addition to the above flow, stakeholders can also use the extension to provide voluntary feedback. In “Connected mode”, connect to the team project you want to provide feedback on. You can then use the extension to capture your findings and submit feedback in the form of feedback response, bug, or task work items.

    Collaborate using connected mode with basic access

    Users with basic access can connect to their Team Services account or Team Foundation Server “15” to view the “Session Insights” page. This page lets users view all completed sessions, at an individual or team level, in one place, allowing them to collaborate with one another as a team. The page provides important summary-level data such as the total work items explored and created, the total time spent across all sessions, and the total number of session owners. Users can scope the data by selecting the “period” they are interested in and group it on various pivots such as sessions, explored work items, and session owners. Depending on their needs, teams can use the session insights page to derive various kinds of insights.

    Note: Click on “Recent exploratory sessions” in the Runs tab under the Test hub to view the “Session Insights” page. Alternatively, you can navigate directly to the insights page from the extension by clicking on [icojn2] in the Timeline.

    As mentioned in the Overview blog, one of the major scenarios that the extension supports is the bug bash. The session insights page enables users to run the end-to-end bug bash scenario, which includes running the bug bash, triaging the bugs filed, and finally measuring the effectiveness of the bug bashes conducted.

    bugbash-scenario.fw

    To run the bug bash, team leaders can specify the features and user stories they want to bash. Team members bash the user story assigned to them by associating it with their session and exploring the application based on the user acceptance criteria provided, if any. Users can also explore multiple work items in the same session. Once the bug bash is complete, the team can view all the completed sessions in the “recent exploratory sessions” page on the Test > Runs hub by changing the pivot to “Sessions”. Using the inline details page, you can easily triage the bugs found during the bug bash and assign them owners and an appropriate priority. Finally, team leaders can measure the effectiveness of the bug bashes by viewing the amount and quality of exploratory testing done for each of the features and user stories. In addition, they can leverage the “Query” support to identify the user stories and features not explored. This data helps team leaders identify gaps in testing and make decisions regarding the quality of the features being shipped.

    unexplored-work-items

    Microsoft Teams integration with Visual Studio Team Services


    VSTS + Teams

    Earlier today, Microsoft Teams was announced. Microsoft Teams is a new chat-based workspace in Office 365 that makes collaborating on software projects with Team Services a breeze. Customers often tell us that there is a need for better chat integration in Team Services. With Microsoft Teams, we aim to provide a comprehensive chat and collaboration experience across your Agile and development work.

    Starting today, Team Services users can stay up to date with alerts for work items, pull requests, commits, and builds using the Connectors within Microsoft Teams. Each Connector event is its own conversation, allowing users to be notified of events they care about and discuss them with their team.

    VSTS Connectors

    We are also bringing the Team Services Kanban boards right into Microsoft Teams, allowing your team to track and create new work items without leaving your team’s channel. The board integration will be available starting next week on November 9. Each board also comes with its own conversation.

    teams-kanbanboard

    Instructions on how to set up these integrations can be found on the Team Services marketplace.

    We’re still early in our collaboration with Microsoft Teams. I am looking for your feedback on the current integrations as well as feedback on new integrations you’d like to see between Team Services and Microsoft Teams – just leave a comment or send me an email.

    Git perf and scale


    New features and UI changes naturally get a lot of attention. Today, I want to spotlight the less visible work that we do on Team Services: ensuring our performance and scale meet our customers’ needs now and in the future. We are constantly working behind the scenes profiling, benchmarking, measuring, and iterating to make every action faster. In this post, I’ll share 3 of the dozens of improvements we’ve made recently.


    First up, we’ve sped up pull request merges significantly. We have an enormous “torture test repo” (tens of GBs across millions of files and 100K+ folders) we use for perf and scale testing. Median merge time for this repo went from 92 seconds to 33 seconds, a 64% reduction. We also saw improvements for normal-sized repos, but it’s harder to generalize their numbers in a meaningful way.

    Several changes contributed to this gain. One was adopting a newer version of LibGit2. Another was altering LibGit2’s caching strategy – its default wasn’t ideal for the way we run merges. As a customer, you’ll notice the faster merges when completing PRs. For our service, it means we can serve more users with fewer resources.


    An engineer on a sister team noticed that one of our ref lookups exhibited O(N) behavior. Refs are the data structure behind branches in Git. We have to look up refs to display branch names on the web. If you’re familiar with time complexity of algorithms, you’ll recall that O(N) behavior means that the work done by a program scales linearly with the size of the input.

    The work done in this particular lookup scaled linearly with the number of branches in a repository. Up to several hundred refs, this lookup was “fast enough” from a human’s point of view. Humans are quite slow compared to computers 😉

    Every millisecond counts in web performance, and there’s no reason to do excess work. We were able to rewrite that lookup to be constant with respect to the number of branches.
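In sketch form (illustrative Python, not our actual service code; the names are hypothetical), the fix amounts to replacing a linear scan with a pre-built index:

```python
# Illustrative sketch: an O(N) scan over refs versus an O(1) dictionary
# lookup. Hypothetical names; the real lookup lives in our Git service.
refs = [(f"refs/heads/feature-{i}", f"sha{i}") for i in range(10_000)]

def lookup_linear(name):
    # O(N): work grows with the number of branches in the repository
    for ref_name, sha in refs:
        if ref_name == name:
            return sha
    return None

ref_index = dict(refs)  # built once, O(N)

def lookup_constant(name):
    # O(1) per lookup, regardless of how many branches exist
    return ref_index.get(name)
```

Both functions return the same answers; only the cost per lookup differs as the branch count grows.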


    The last improvement requires a bit more explanation. At various points in our system, we need to track the history of a file: which commits touched this file? Our initial implementation (which served us well for several years) was to track each commit in a SQL table which we could query by file path or by commit.

    Fast forward several years. One of the oldest repos on our service is the one which holds the code for VSTS itself. The SQL table tracking its commits had grown to 90GB (many, many times the size of the repo itself). Even after the usual tricks like schema changes and SQL page compression, we weren’t able to get the table size down to an acceptable level. We needed to rethink the problem.

    The team spent 3+ months designing and implementing a fast, compact representation of the Git graph. This representation is small enough to keep in memory on the application tier machines, which themselves are cheaper to operate than SQL machines. The change was carefully designed and implemented to be 100% transparent to end customers. Across a variety of measurements, we found no noticeable performance regressions and in many cases saw improvements.
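As a rough sketch of the idea (a toy Python model with hypothetical names; the real representation is a compact in-memory structure, not dictionaries), tracking file history becomes a walk over an in-memory graph instead of a query against a large SQL table:

```python
# Toy in-memory commit graph, purely illustrative. Each commit records
# its parents and the set of paths it changed.
parents = {"c3": ["c2"], "c2": ["c1"], "c1": []}
changed = {"c3": {"src/app.cs"}, "c2": {"README.md"},
           "c1": {"src/app.cs", "README.md"}}

def file_history(tip, path):
    """Walk the graph from `tip`, collecting commits that touched `path`."""
    history, stack, seen = [], [tip], set()
    while stack:
        commit = stack.pop()
        if commit in seen:
            continue
        seen.add(commit)
        if path in changed.get(commit, ()):
            history.append(commit)
        stack.extend(parents.get(commit, ()))
    return history

print(file_history("c3", "src/app.cs"))  # ['c3', 'c1']
```

Because the whole graph fits in memory on the application tier, a walk like this avoids the per-commit SQL rows that made the old table grow to 90GB.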

    We were able to completely drop the commit change tracking table, freeing up dozens of gigabytes on every scale unit’s database tier. We finished migrating to the new system over 2 months ago. Besides a handful of incidents during early dogfooding, we have not received complaints about either its performance or correctness. (I’m flirting with chaos making such claims, of course. If you have a scenario where performance regressed since the beginning of September, email me so we can investigate.)

    This explanation leaves out a lot of details in favor of brevity. If there’s interest, we’re thinking of doing a series of blog articles on how our Git service works under the hood. Let me know in the comments what you want to hear more about.

    Thanks to the VC First Party team [Wil, Jiange, Congyi, Stolee, Garima, Saeed, and others] for their insights on this topic. All remaining errors are mine alone.

    How to use Test Step using REST Client Helper?


    The test case is the backbone of all manual testing scenarios. You can create test cases using the web client from the Test or Work hubs, or from Microsoft Test Manager (MTM); they are then stored in Team Foundation Server or Visual Studio Team Services. Using these clients you can create test artifacts such as test cases with test steps, test step attachments, shared steps, parameters, and shared parameters. A test case is also a work item, and using the Work Item REST API you can create a work item of type Test Case; see Create a work item.

    Problem

    Until this release, there was no support for modifying or updating test steps in a test case work item. The work item stores test steps, associated test step attachments, and expected results in a custom XML document, so a helper is needed to create that custom XML when updating test steps.

    Solution

    With the current deployment, we have added support to create/read/update/delete test steps (action and expected result) and test step attachments. The ITestBase interface exposes the required key methods – loadActions and saveActions – which provide helpers in both C# and JS for the operations mentioned above.

    Requirement

    C# Client (Microsoft.TeamFoundationServer.Client) as released in previous deployment.
    OR
    JS Client (vss-sdk-extension) (Note: JS changes will be available only after current deployment completes.)

    Walk through using new helper in C# client

    Here, let’s walk through, step by step, how to consume these newly added helper classes. We have also added a GitHub sample with some more operations (link at the bottom of the post).

    1. Create an instance of TestBaseHelper class and generate ITestBase object using that.
      TestBaseHelper helper = new TestBaseHelper();
      ITestBase testBase = helper.Create();
      
    2. ITestBase exposes methods to create a test step, generate XML, save actions, and load actions. You can assign a title, set the expected result and description for each test step, and associate an attachment using its URL. Finally, all test steps are added to the Actions collection of the testBase object (see below).
      ITestStep testStep1 = testBase.CreateTestStep();
      testStep1.Title = "title1";
      testStep1.ExpectedResult = "expected1";
      testStep1.Description = "description1";
      testStep1.Attachments.Add(testStep1.CreateAttachment(attachmentObject.Url, "attachment1"));
      
      testBase.Actions.Add(testStep1);
      
    3. A call to SaveActions uses the helper classes to set the appropriate test case field – Test Steps – saving the newly added steps, expected results, and attachment links. The JSON patch document built by “SaveActions” is then passed to CreateWorkItemAsync as shown below.
      JsonPatchDocument json = new JsonPatchDocument();
      
      // create a title field
      JsonPatchOperation patchDocument1 = new JsonPatchOperation();
      patchDocument1.Operation = Operation.Add;
      patchDocument1.Path = "/fields/System.Title";
      patchDocument1.Value = "New Test Case";
      json.Add(patchDocument1);
      
      // add test steps in json
      // it will update json document based on test steps and attachments
      json = testBase.SaveActions(json);
      
      // create a test case
      var testCaseObject = _witClient.CreateWorkItemAsync(json, projectName, "Test Case").Result;
      
    4. To modify a test case and its steps, get the test case and call “LoadActions”, which internally uses the helper class to parse the given XML and attachment links as shown below. This populates the testBase object with all the appropriate details.
      testCaseObject = _witClient.GetWorkItemAsync(testCaseId, null, null, WorkItemExpand.Relations).Result;
      
      // initiate testbase object again
      testBase = helper.Create();
      
      // fetch xml from testcase object
      var xml = testCaseObject.Fields["Microsoft.VSTS.TCM.Steps"].ToString();
      
      // create TestAttachmentLink objects from the work item relations; the test step helper will use these
      IList<TestAttachmentLink> tcmlinks = new List<TestAttachmentLink>();
      foreach (WorkItemRelation rel in testCaseObject.Relations)
      {
          TestAttachmentLink tcmlink = new TestAttachmentLink();
          tcmlink.Url = rel.Url;
          tcmlink.Attributes = rel.Attributes;
          tcmlink.Rel = rel.Rel;
          tcmlinks.Add(tcmlink);
      }
      
      // load test step xml and attachment links
      testBase.LoadActions(xml, tcmlinks);
      
    5. Once the testBase object has been loaded with the test case information, you can update the test steps and attachments in the test case object.
      ITestStep testStep;
      //updating 1st test step
      testStep = (ITestStep)testBase.Actions[0];
      testStep.Title = "New Title";
      testStep.ExpectedResult = "New expected result";
      
      //removing 2nd test step
      testBase.Actions.RemoveAt(1);
      
      //adding new test step
      ITestStep testStep3 = testBase.CreateTestStep();
      testStep3.Title = "Title 3";
      testStep3.ExpectedResult = "Expected 3";
      testBase.Actions.Add(testStep3);
      
    6. Update test case object using new changes in the test steps and attachments.
      JsonPatchDocument json2 = new JsonPatchDocument();
      json2 = testBase.SaveActions(json2);
      // update testcase wit using new json
      testCaseObject = _witClient.UpdateWorkItemAsync(json2, testCaseId).Result;
      

    As shown above, you can now use the helper classes provided to update test case steps, while still using the existing Work Item REST APIs for the test case work item. You can find comprehensive samples for both C# and JS in the GitHub project: RESTApi-Sample.

    – Test Management Team

    Issue with using Application Insights with load tests


    If you use Application Insights to collect app side metrics during load tests, you will find that it currently doesn’t work as expected.

    1. When configuring applications to collect app-side metrics in the load test editor in Visual Studio, you will see an error similar to the one below:
    2. If you have already configured Application Insights earlier for your load tests, then when running those load tests you will not see any app metrics being collected, and the ‘Status Messages’ in your load test will show the message ‘Application counter collection failed due to an internal error and will be disabled for this run’.

    This is due to an infrastructure issue. We are working with the Application Insights (AI) team to understand and resolve it. At this time, we don’t have an ETA for a resolution. In the meantime, to work around the issue, you can view the application metrics using the Azure portal or the APIs documented at https://dev.applicationsinsights.io/quickstart/
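For example, a metrics request to the Application Insights REST API can be built along these lines. This is a hedged sketch: the app id, API key, and metric name are placeholders, and you should confirm the exact endpoint shape against the API documentation linked above before relying on it.

```python
# Hedged sketch: build (but do not send) a request for app metrics from
# the Application Insights REST API. <app-id>, <api-key>, and the metric
# name are placeholders; verify the endpoint against the API docs above.
from urllib.parse import urlencode
from urllib.request import Request

def metrics_request(app_id, api_key, metric, timespan="PT1H"):
    url = (f"https://api.applicationinsights.io/v1/apps/{app_id}"
           f"/metrics/{metric}?" + urlencode({"timespan": timespan}))
    return Request(url, headers={"x-api-key": api_key})

req = metrics_request("<app-id>", "<api-key>", "requests/duration")
# urllib.request.urlopen(req) would then return the aggregated values
```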

    If you have any questions related to load testing, please reach out to us at vsoloadtest@microsoft.com

    Best of Both Worlds


    Back in February of 2015, I wrote a blog asking a very simple question: how many vendors does it take to implement DevOps? At the time I wrote the post, I felt the answer was one. Almost two years later, I believe that now more than ever. So why do companies insist on manually building a pipeline instead of using a unified solution?

    Fear of Vendor Lock In

    Despite the fact that some vendors offer a complete solution, many customers still attempt to build DevOps pipelines using as many vendors as possible.

    Historically, putting all your eggs in one basket has proved to be risky. Because the systems only provided an All or Nothing approach, users would lose the flexibility to adopt new technology as it was released. The customer was forced to wait for the solution provider to offer an equivalent feature or worse have to start over again with another solution. Customers started to avoid the benefits of a unified solution for flexibility.

    Abandoning the single-vendor approach allowed the customer to adopt the hot new technology and be on the bleeding edge with their pipeline. They could evaluate each offering and select the best of breed in each area. On the surface this seemed like a great idea, until they realized the products did not play well together. By this point, they had convinced themselves the cost of integration was unavoidable and just a cost of doing business.

    This change in customer mindset had vendors focusing on having the best CI system or source control instead of an integrated system. With vendors only focusing on a part of the pipeline, there were great advances in each area. However, the effort to integrate continued to increase at an alarming rate. Eventually the cost of maintaining the pipeline became too great and actually started to have a noticeable impact on developer productivity.

    Even when all the products play nice with each other, it can be difficult to enable good traceability from a code change all the way to production. This is the reason more and more vendors are starting to expand their offerings to reduce the cost and risk of integration.

    Calculate True Cost of Ownership

    When building your DevOps pipeline, you have to consider the true cost of ownership. The cost is much more than what you paid for the products; it includes the time and effort to integrate and maintain them. Time spent on integration and maintenance is time not spent innovating on the software that makes your company money. To try and reduce the cost of ownership, vendors have begun to join forces (http://www.businesswire.com/news/home/20160914005298/en/DevOps-Leaders-Announce-DevOps-Express-Industry-Initiative). This should help mitigate the cost of building your own DevOps pipeline. Nevertheless, with each new tool and vendor, you incur a cost of integration. Someone on your team is now responsible for maintaining that pipeline, making sure all the products are upgraded and that the integration is still intact. This is time much better spent delivering value to your end users instead of maintaining your pipeline.

    Adding vendors also complicates your billing, as you are paying multiple vendors instead of one. The opportunity for bundle or volume discounts is also reduced.

    Best of Breed

    I have met many customers who claim they want best of breed products. However, when I asked what made one product better than another, I often found that they did not even use that feature. They were complicating their pipeline out of vanity: everyone else said this was the best, so we wanted to be on the best. You need to find the best product for you, which might not be the best of breed for that area. Just because Product A does not have all the bells and whistles of Product B does not mean Product B is the right one for you.

    Best of Both Worlds

    Today, customers want the ease of a unified solution with the ability to select best of breed. Solutions like Team Services offer you both. Even though Team Services offers everything you need to build a DevOps pipeline from source control and continuous integration to package management and continuous delivery, you are free to replace each piece with the product of your choice. If you already have an investment in a continuous integration system, you can continue to use it along with everything else Team Services has to offer. This can go a long way towards reducing the number of vendors in your pipeline.

    We have taken a new approach with Team Services. It is an approach that tries to appeal to both types of customers: those that want a unified solution and those that want best of breed. We have teams dedicated to Agile planning, source control, continuous integration, package management, and continuous delivery. These teams work to make sure we stack up against the offerings in each category. However, they never lose sight of the power of a unified system.

    This approach reduces the complexity of building and maintaining your pipeline while retaining your flexibility to select products that are the best fit for your organization.

    Import your TFS Database into Visual Studio Team Services


    Since I started my role on the Visual Studio Team Services & Team Foundation Server team, I have been looking forward to the day that we could help TFS customers successfully migrate all of their data to our SaaS-based hosted TFS service: Visual Studio Team Services. It has been by far one of our more popular feature requests on UserVoice as well.

    I am joined by so many on the team who have been waiting for this moment! We are very excited to announce the Preview of the TFS Database Import Service for Visual Studio Team Services.

    In the past, we have had various options that offered a low-fidelity method for migrating your data. The difference today is that the TFS Database Import Service is a high-fidelity migration that brings over your source code history, work items, builds, etc., and keeps the same ID numbers, traceability, settings, permissions, personalizations, and much more. Our goal for your final production import is that your team will be working in TFS on a Friday and then continue their work in Visual Studio Team Services when they come back to work on Monday.

    TFS to VSTS Migration Diagram

    Our engineering teams have put a lot of work into building this service over the past year, and the effort has touched every part of our engineering organization. We have been running a Private Preview for many months now and have learned along the way how to make the Import Service even better. You may have even heard stories about some of our more notable enterprise customers who have migrated and adopted Team Services. We have also noticed a shift in our customer discussions over the past year: many customers, including large enterprises, are asking about the best method for migrating to Visual Studio Team Services.

    Some of the main reasons that they tell us they want to migrate from Team Foundation Server to Visual Studio Team Services are:

    • No more manual upgrades, and quicker updates – with Team Services, upgrades are deployed roughly every three weeks, and your development teams can immediately take advantage of them months before they are available in TFS updates or major releases.
    • Significantly reduced administration – imagine not needing to continually monitor and administer your TFS infrastructure. We take care of that for you.
    • Accessible anywhere – your team members will have the flexibility they need to securely access Team Services from work, home, remote offices, or their mobile devices.
    • Included with Visual Studio Subscriptions – many of our developer customers already have Visual Studio Team Services included as a benefit of their Visual Studio (formerly known as MSDN) subscriptions! The ability to migrate your TFS database now allows those subscribers to take advantage of that important benefit.
    • And many more…

    TFS to Visual Studio Team Services Migration Guide

    To help you plan your migration project, we have a full-color Migration Guide that walks you through every step of the process, including how to get help, finding a partner, meeting prerequisites, validating your environment, and ultimately queuing your imports. We have designed it to include the checklists, worksheets, and pointers to additional documentation that you'll need, while allowing you to stay focused on the current step in the migration. The migration guide works as a project plan for an organization of any size, but smaller teams will find they can skip several parts!

    To get started with your migration project, we recommend downloading the TFS to Visual Studio Team Services Migration Guide.

    TFS to VSTS Migration Guide Cover

    Invitation Codes

    During the Public Preview of the TFS Database Import Service, we will be providing invitation codes that you can use with the TFS Migration tooling to queue both a dry run import as well as your production import.  You can find out more about how to request invitation codes with our Preview questionnaire in Phase 1 of the Migration Guide.

    DevOps Consulting Services Partners

    Making sure that you have trained consulting partners to help you with your migration project is very important to us.  We held our first Global Partner Bootcamp in Redmond, Washington last Friday with many of our DevOps Partners.  We feel confident that they are equipped to help you with successfully importing your TFS database and migrating your team to Visual Studio Team Services.  In addition to our Microsoft Partner community, you can also reach out to your Microsoft Premier Support or Microsoft Consulting Services contact as well as many of our awesome DevOps Microsoft MVPs to help you with your migration project.

     

    Download the Migration Guide today to get started.  We cannot wait to help you with migrating your teams to Visual Studio Team Services!

     

    Take care,

    Ed Blankenship
    Product Manager, Visual Studio Team Services & Team Foundation Server

     

    A big thanks to Rogan Ferguson, Mario Rodriguez, Dan Hellem, and the many engineers who have put in some amazing time & effort to ship the TFS Database Import Service!


    Test result storage improvements and impact on upgrading to Team Foundation Server 2017


    With Team Foundation Server 2017 now available, TFS administrators will be planning to upgrade their existing TFS installations to this new version. As admins plan this activity, we wanted to discuss an important TFS database schema improvement that is rolling out with TFS 2017.

    What is the change?
    With TFS 2017, the test results generated from automated and manual testing will be stored in a more compact and efficient format, resulting in reduced storage footprint for TFS collection databases. With testing in Continuous Integration (CI) and Continuous Deployment (CD) workflows gathering momentum, this change will translate to meaningful SQL storage savings to customers whose automated test environments generate thousands of test results each day.

    What is the impact of this change?
    A new schema for test results with TFS 2017 means existing test result data must be migrated when you upgrade your TFS server to the new version. Given the scale of data migration, you will encounter longer-than-normal upgrade times depending on the amount of test data you have in your TFS collections. For most small to medium sized TFS collections (under 100 GB), the impact will not be noticeable: your upgrade will take a few hours longer. However, if the test result data in your TFS collection is more than 100 GB, then you must plan for a longer-than-usual upgrade window.

    How do we reduce the time taken to upgrade to TFS 2017?
    Here are the guidelines that will help you reduce the TFS upgrade window time:

    • The less data there is to migrate, the faster the upgrade. We recommend cleaning up old test results in your system by configuring a test retention policy. Details about the retention policy are available in this blog: https://blogs.msdn.microsoft.com/visualstudioalm/2015/10/08/test-result-data-retention-with-team-foundation-server-2015, and the steps to configure retention are available in the documentation. Note that retention does not clean up test results instantaneously; the policy is designed to gradually free up space by deleting test results in batches, to avoid impacting the performance of your TFS instance. As such, make sure you configure retention right away, and allow a buffer period of a few weeks with retention enabled before you upgrade.
    • If you cannot wait for the test retention policy to gradually clean up test results, you have a second option: cleaning up test results just before the upgrade. You need to install TFS (TFS 2017 or later) and then run the TFSConfig.exe tool to clean up test results. Note that you need to run the tool against TFS collections while they are offline, in the window after installing TFS but before starting the upgrade wizard. Most importantly, remember to configure the test retention policy even after cleaning up test results with the TFSConfig.exe tool, to prevent unbounded growth of test result data in the future.
    • We recommend trying out the upgrade on a pre-production environment before upgrading your production TFS instances. Because pre-production environments typically have less hardware capacity than production environments, the upgrade may take longer there than it will in production. Make sure you have a backup of your TFS collection databases when you upgrade the production instances.
    • If you still see prolonged upgrade times, reach out to us either via customer support or drop a mail to devops_tools@microsoft.com and we’ll be glad to help.

    What kind of gains can we expect with the test result schema improvements?
    The gain varies depending on your mix of automated vs. manual testing: the more test results you generate (that is, the higher your frequency of test execution), the higher the gains. We observed a 5x-8x reduction in storage used by test results with the new schema across the various TFS collections we tested. For the Visual Studio Team Services account used by the TFS development team itself, the test result footprint dropped from 80 GB to 10 GB after upgrading to the new schema. Owing to the reduced data footprint, we have also achieved modest performance gains with this new schema.

    What is the impact for teams using Visual Studio Team Services?
    For Visual Studio Team Services accounts, the data will be migrated to the new schema in a phased manner. The migration will be transparent to users without any interruptions or down time. Basically, you won’t notice any change in the way you run tests or analyze test results.

    What are the improvements that make the new schema for test results more efficient?
    The existing test result storage employed a flat schema design that was motivated by manual testing scenarios with Microsoft Test Manager. This design was extended to automated testing as we added capabilities like Lab BDT with XAML builds and on-demand automated testing with MTM. As adoption of automated testing in Build (Continuous Integration) and Release (Continuous Deployment) grew, we witnessed sizable growth in test result data. In many TFS collection databases where customers have invested heavily in automated testing, we observed that test result storage was by far the largest consumer of storage space. With this update we are optimizing the schema for automated testing by moving from a flat schema to a normalized one. The new schema has an automated test case reference object that stores all test metadata (test method name, container, priority, owner, etc.), data that does not change with each test result. The test results table contains only the fields that change with each result (outcome, start date/time, duration, machine name, etc.) and points to the automated test case reference for its metadata. With these redesigned tables, we have significantly reduced data duplication and eliminated the numerous indexes that existed in the flat schema, making the new schema 5x-8x more efficient in terms of storage space.
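    The normalization described above can be sketched with a toy Python model. This is illustrative only, not the actual TFS table design: the field names and the two-table split are assumptions made for the example, but they show why repeated runs of the same test stop duplicating metadata.

```python
# Toy model of flat vs. normalized test result storage.
# Illustrative sketch only -- not the actual TFS schema.

# Flat schema: every result row repeats the unchanging test metadata.
flat_results = [
    {"test_name": "LoginTests.CanSignIn", "container": "Web.Tests.dll",
     "owner": "chris", "priority": 1, "outcome": "Passed", "duration_ms": 130},
    {"test_name": "LoginTests.CanSignIn", "container": "Web.Tests.dll",
     "owner": "chris", "priority": 1, "outcome": "Failed", "duration_ms": 145},
]

# Normalized schema: metadata lives once in a "test case reference" table;
# each result row stores only the per-run fields plus a pointer to it.
test_case_refs = {}   # (name, container) -> metadata stored exactly once
results = []          # compact per-run rows

def add_result(test_name, container, owner, priority, outcome, duration_ms):
    key = (test_name, container)
    if key not in test_case_refs:
        test_case_refs[key] = {"owner": owner, "priority": priority}
    results.append({"ref": key, "outcome": outcome, "duration_ms": duration_ms})

for r in flat_results:
    add_result(r["test_name"], r["container"], r["owner"], r["priority"],
               r["outcome"], r["duration_ms"])

# Metadata is stored once, however many times the test runs.
print(len(test_case_refs), len(results))  # 1 reference row, 2 result rows
```

    The more often a test executes, the more the flat schema duplicates its metadata, which is why collections with heavy automated testing see the largest savings.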

    If you have any questions or need any help, please drop us a mail at devops_tools@microsoft.com.

    Thank you,
    Manoj Bableshwar – Visual Studio Testing Tools team

    Package Management is generally available: NuGet, npm, and more


    Today, I’m proud to announce that Package Management is generally available for Team Services and TFS 2017! If you haven’t already, install it from the Visual Studio Marketplace.

    Best-in-class support for NuGet 3

    NuGet support in Package Management enables continuous delivery workflows by hosting your packages and making them available to your team, your builds, and your releases. With best-in-class support for the latest NuGet 3.x clients, Package Management is an easy addition to your .NET ecosystem. If you’re still hosting a private copy of NuGet.Server or putting your packages on a file share, Package Management can remove that burden and even help you migrate.

    To get started with NuGet in Package Management, check out the docs.

    npm in Package Management

    Package Management was never just about NuGet. Accordingly, the team has been hard at work over the last few months adding support for npm packages. If you’re a developer working with Node.js, JavaScript, or any of its variants, you can now use Team Services to host private npm packages right alongside your NuGet packages.

    npm is available to every Team Services user with a Package Management license. To enable it, simply install Package Management from the Marketplace, if you haven’t already, then check out the get started docs.

    npm support will also be available in TFS 2017 Update 1. Keep an eye on the features timeline for the latest updates.

    npm in Package Management

    GA updates: pricing, regions, and more

    If you’ve been using Package Management during the preview period, you’ll now need to purchase a license in the Marketplace to continue using it. Your account has automatically been converted to a 60-day trial to allow ample time to do so. Look for the notice bar in the Package Management hub or go directly to the Users hub in your account to buy licenses.

    The pricing for Package Management is:

    • First 5 users: Free (but licenses for these users must still be acquired through the Marketplace)
    • Users 6 through 100: $4 each
    • Users 101 through 1000: $1.50 each
    • Users 1001 and above: $0.50 each
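    The tiers above can be turned into a quick back-of-the-envelope calculation. The following Python helper is illustrative only, using just the per-user figures listed here; see the pricing calculator for authoritative numbers.

```python
def package_management_monthly_cost(users: int) -> float:
    """Approximate monthly cost in USD for the tiers listed above.

    First 5 users free, users 6-100 at $4, users 101-1000 at $1.50,
    users 1001 and above at $0.50. Illustrative only.
    """
    cost = 0.0
    cost += max(0, min(users, 100) - 5) * 4.00     # users 6-100
    cost += max(0, min(users, 1000) - 100) * 1.50  # users 101-1000
    cost += max(0, users - 1000) * 0.50            # users 1001+
    return cost

print(package_management_monthly_cost(5))     # 0.0
print(package_management_monthly_cost(100))   # 380.0
print(package_management_monthly_cost(1200))  # 1830.0
```

    For example, a 100-user team pays for 95 users at $4, while a 1200-user organization adds the two cheaper tiers on top of that.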

    The Package Management extension is also included with these Visual Studio subscriptions:

    • Visual Studio Enterprise – monthly
    • Visual Studio Enterprise – annual
    • Visual Studio Enterprise with MSDN

    See our pricing calculator for further information. Only Visual Studio Team Services users (not stakeholders) can be assigned the Package Management extension.

    Finally, Package Management is now also available in the India South and Brazil South regions.

    What’s next?

    With the launch of Package Management in TFS 2017, the team is now fully focused on adding additional value to the extension. Over the next year, we’ll be investing in a few key areas:

    • Package lifecycle: we want Package Management to serve not just as a repository for bits, but also as a service that helps you manage the production and release of your components. Accordingly, we’ll continue to invest in features that more closely integrate packages with Team Build and with Release Management, including more investments in versioning and more metadata about how your packages were produced.
    • Dependency management: packages come from everywhere: NuGet.org, teams across your enterprise, and teams in your group. In a world where there’s always pressure to release faster and innovate more, it makes sense to re-use as much code as possible. To enable that re-use, we’ll invest in tooling that helps you understand where your dependencies are coming from, how they’re licensed, if they’re secure, and more.
    • Refreshed experience: when we launched Package Management last November, we shipped a simple UX that worked well for the few scenarios we supported. However, as we expand the service with these new investments, we’ll be transitioning to an expanded UX that more closely matches the rest of Team Services, provides canvases for partners to extend Package Management with their own data and functionality, and gives us room to grow.
    • Maven/Ivy: as the rest of the product builds ever-better support for the Java ecosystem, it follows that Package Management should serve as a repository for the packages Java developers use most. So, we’ll be building support for Maven packages into Package Management feeds.

    Announcing Code Search on Team Foundation Server 2017


    Code Search is the most downloaded Team Services extension in the Marketplace! And it is now available on Team Foundation Server 2017!

    Code Search provides fast, flexible, and accurate search across your code in TFS. As your code base expands and is divided across multiple projects and repositories, finding what you need becomes increasingly difficult. To maximize cross-team collaboration and code sharing, Code Search can quickly and efficiently locate relevant information across all your projects in a collection.

    Read more about the capabilities of Code Search here.

    Understand the hardware requirements and software dependencies for Code Search on Team Foundation Server 2017 here.

    Configuring your TFS 2017 server for Code Search

    1. You can configure Code Search as part of your production upgrade via the TFS Server Configuration wizard:

    [Image: Search configuration in the TFS Server Configuration wizard]

    2. Or you can complete your production upgrade first and subsequently configure Code Search through the dedicated Search Configuration Wizard:

    [Image: Dedicated Search Configuration wizard]

    3. To try out Code Search, you can use a pre-production TFS instance and carry out a pre-production upgrade. In this case, configure Code Search after the pre-production upgrade is complete. See step 2 above.

    4. You can even configure Code Search on a separate server dedicated to Search. In fact, we recommend this approach if you have more than 250 users or if the average CPU utilization on your TFS server is higher than 50%.

    [Image: Configuring Code Search on a separate server]

     

    Got feedback?

    How can we make Code Search better for you? Here is how you can get in touch with us

     

    Thanks,
    Search team

    Announcing Public Preview for Work Item Search


    Today, we are excited to announce the public preview of Work Item Search in Visual Studio Team Services. Work Item Search provides fast and flexible search across all your work items.

    With Work Item Search you can quickly and easily find relevant work items by searching across all work item fields over all projects in an account. You can perform full text searches across all fields to efficiently locate relevant work items. Use in-line search filters, on any work item field, to quickly narrow down to a list of work items.

    Enabling Work Item Search for your Team Services account

    Work Item Search is available as a free extension on Visual Studio Team Services Marketplace. Click the install button on the extension description page and follow instructions displayed, to enable the feature for your account.
    Note that you need to be an account admin to install the feature. If you are not, then the install experience will allow you to request your account admin to install the feature. Work Item Search can be added to any Team Services account for free. By installing this extension through the Visual Studio Marketplace, any user with access to work items can take advantage of Work Item Search.

    You can start searching for work items using the work item search box in the top right corner. Once in the search results page, you can easily switch between Code and Work Item Search.
    [Image: Work item search box and results page]

    Search across one or more projects

    Work Item Search enables you to search across all projects, so you can focus on the results that matter most to you. You can scope search and drill down into an area path of choice.
    [Image: Searching across all projects]

    Full text search across all fields

    You can easily search across all work item fields, including custom fields, which enables more natural searches. The snippet view indicates where matches were found.

    You no longer need to specify a target work item field to search against. Type the terms you recall, and Work Item Search will match them against every work item field, including title, description, tags, repro steps, and more. Matching terms across all work item fields enables you to do more natural searches.
    Search across all fields

    Quick Filters

    Quick inline search filters let you refine work items in seconds. The dropdown list of suggestions helps complete your search faster. You can filter work items by specific criteria on any work item field. For example, a search such as “AssignedTo: Chris WorkItemType: Bug State: Active” finds all active bugs assigned to a user named Chris.
    Quick Filters
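    To illustrate how a query like the one above breaks down into field criteria, here is a minimal Python parsing sketch. It is hypothetical: the real Work Item Search grammar is richer than this (quoted values, free text mixed with filters, etc.).

```python
import re

# Split an inline-filter query such as
#   "AssignedTo: Chris WorkItemType: Bug State: Active"
# into {field: value} pairs. Hypothetical sketch, not the service's grammar.
FILTER_RE = re.compile(r"(\w+):\s*")

def parse_filters(query: str) -> dict:
    # re.split with a capturing group keeps the captured field names,
    # alternating field, value, field, value, ...
    parts = FILTER_RE.split(query)[1:]
    it = iter(parts)
    return {field: value.strip() for field, value in zip(it, it)}

print(parse_filters("AssignedTo: Chris WorkItemType: Bug State: Active"))
# {'AssignedTo': 'Chris', 'WorkItemType': 'Bug', 'State': 'Active'}
```

    Each recognized pair narrows the result set, which is why stacking filters like `WorkItemType` and `State` quickly reduces thousands of work items to a handful.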

    Rich integration with work item tracking

    The Work Item Search interface integrates with familiar controls in the Work hub, giving you the ability to view, edit, comment, share and much more.
    [Image: Integration with work item tracking controls]

    Got feedback?

    How can we make Work Item Search better for you? Here is how you can get in touch with us

     
    Thanks,
    Search team

    Announcing general availability of Release Management


    Today we are excited to announce the general availability of Release Management in Visual Studio Team Services. Release Management is available for Team Foundation Server 2017 as well.

    Since we announced the Public Preview of Release Management, we have been adding new features continuously and the service has been used by thousands of customers whose valuable feedback has helped us improve the product.

    Release Management is an essential element of DevOps that helps your team continuously deliver software to your customers at a faster pace and with high quality. Using Release Management, you can automate the deployment and testing of your application to different environments such as dev, test, staging, and production. You can use it to deploy to any app platform, targeting on-premises servers or the cloud.

    Continuous delivery Automation flow

    Release Management works cross-platform and supports different application types, from Java to ASP.NET and Node.js. Release Management has also been designed to integrate with other ALM tools and to let you customize your release process. For example, you can integrate Release Management with Jenkins or TeamCity builds, or use Node.js sources from GitHub as artifacts to deploy directly. You can also customize deployments by using the automation tasks available out of the box, or write a custom automation task or extension to meet your requirements.

    Automated deployments

    You can design and automate release pipelines across your environments, targeting any platform and any application, by using Visual Studio Release Management. You can trigger a release as soon as the build is available, or schedule it. An automated pipeline helps you get to market faster and respond with greater agility to customer feedback.

    [Image: Release summary]

    Manual or automated gates for approval workflows

    You can easily configure deployments using pre- or post-deployment approvals: completely automated for dev/test environments, with manual approvals for production environments. Automatic notifications ensure collaboration and release visibility among team members. You get full auditability of the releases and approvals.

    [Image: Release Management approvals]

    Raise the quality bar with every release

    Testing is essential for any release. You can ship with confidence by configuring testing tasks for all of your release checkpoints: performance, A/B, functional, security, beta testing, and more. Using the “Manual Intervention” task, you can even track and perform manual testing within the automated flow.

    Release Quality

    Deploying to Azure is easy

    Release Management makes it very easy to configure your release with built-in tasks and simple configuration for deploying to Azure. You can deploy to Azure Web Apps, Docker containers, Virtual Machines, and more. You can also deploy to a range of other targets such as VMware, System Center Virtual Machine Manager, or servers managed through another virtualization platform.

    End to end traceability

    Traceability is critical in releases: you can track the status of releases and deployments, including the commits and work items in each environment.

    Refer to documentation to learn more about Release Management.

    Try out Release Management in Visual Studio Team Services.

    For any questions, comments and feedback – please reach out to Gopinath.ch AT microsoft DOT com.

    Thanks

    Gopinath

    Release Management Team

    Twitter: @gopinach

     


