Technical News


Dec 07 2018

Commit Assistant: Ubisoft Learning Bot


What if a development bot could help you detect software bugs automatically, then provide probable causes for each issue along with fix suggestions? Identifying patterns in past bugs to better intercept new ones could save software development teams significant debugging time and cost.

At Ubisoft La Forge Research Lab in Montreal, Technical Architect Mathieu Nayrolles collaborates on such a learning bot with Abdelwahab Hamou-Lhadj, an expert from the Electrical and Computer Engineering Department at Concordia University. Using the innovative CLEVER approach, they can detect commits that are likely to introduce bugs, with an average precision of 79.10% and a recall of 65.61%.

CLEVER combines code metrics, clone detection techniques, and project dependency analysis to detect risky commits within and across projects. CLEVER operates at commit time, before the commits reach the central code repository. Also, because it relies on code comparison, CLEVER not only detects risky commits but also makes recommendations to developers on how to fix them.
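To make the idea concrete, here is a minimal sketch of what a commit-time risk check could look like. This is not the actual CLEVER implementation; the metrics, thresholds and class names are illustrative assumptions only.

```java
// Minimal sketch (not CLEVER itself): a commit-time check that combines a few
// simple code metrics into a risk flag. Metrics and thresholds are made up.
import java.util.List;

public class RiskyCommitCheck {

    /** Toy representation of one changed file in a commit. */
    record ChangedFile(String path, int linesAdded, int linesDeleted, int pastBugFixes) {}

    /**
     * Flags a commit as risky when it touches many files, adds a lot of code,
     * or modifies files that were frequently involved in past bug fixes.
     */
    static boolean isRisky(List<ChangedFile> changes) {
        int totalAdded = changes.stream().mapToInt(ChangedFile::linesAdded).sum();
        long bugProneFiles = changes.stream().filter(f -> f.pastBugFixes() > 5).count();
        return changes.size() > 10 || totalAdded > 500 || bugProneFiles > 0;
    }

    public static void main(String[] args) {
        List<ChangedFile> commit = List.of(
            new ChangedFile("engine/Physics.java", 620, 40, 2),
            new ChangedFile("engine/Renderer.java", 35, 3, 7));
        System.out.println("Risky commit? " + isRisky(commit)); // prints: true
    }
}
```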

You can find more details on the risky commit detector online:

Nov 16 2018

DSpot Study on Ten Mature Open Source Projects

Improving existing Java test cases and giving the improvements back to developers as patches or pull requests: the idea is certainly attractive. But is it yet an efficient and proven code optimisation process?

A scientific paper from Benjamin Danglot (Inria), co-signed with three more STAMP project contributors, strongly suggests so.
The PhD candidate provides a thorough study of ten notable and mature open source projects, in which all test methods from 40 unit test classes have been amplified by DSpot. This demonstrates the STAMP tool's ability to strengthen real unit test classes in Java.
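As an illustration only (not an example taken from the study), test amplification starts from an existing JUnit test and derives new variants, for instance by modifying inputs and adding assertions on observed values. The class and method names below are hypothetical.

```java
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertFalse;
import java.util.Stack;
import org.junit.Test;

public class StackTest {

    // Original, hand-written test.
    @Test
    public void pushIncreasesSize() {
        Stack<String> stack = new Stack<>();
        stack.push("a");
        assertEquals(1, stack.size());
    }

    // The kind of variant an amplification tool could derive: a new input
    // value plus extra assertions on values observed while running the test.
    @Test
    public void pushIncreasesSize_amplified() {
        Stack<String> stack = new Stack<>();
        stack.push("");                  // amplified input
        assertEquals(1, stack.size());
        assertEquals("", stack.peek());  // added assertion on an observed value
        assertFalse(stack.isEmpty());    // added assertion on an observed value
    }
}
```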

More test automation will be offered in the future, requiring a better understanding and comparison of test purposes. Moreover, DSpot can be placed in a continuous integration (CI) service, where test classes would be amplified on the fly. This would greatly improve the industrial applicability of this software engineering research, conclude the authors.

Nov 13 2018

How can software code perturbation strengthen reliability?

A recent IEEE blog article by a group of researchers involved in the STAMP project reveals that, when facing state perturbations, software might be more stable and reliable than expected.


This fascinating phenomenon is called "Correctness Attraction", in reference to the concepts of “stable equilibrium” and “attraction basin” in physics. It designates input points for which a software system, despite a perturbation of its internal state, eventually reaches the same fixed and correct point under a given perturbation model.

Moreover, this could lead to new “bug absorbing zones” in software applications where software engineering techniques would improve the correctness attraction.
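As a small, hypothetical illustration of the phenomenon (not an example taken from the study): in the quicksort below, the pivot index is deliberately perturbed, yet the output remains correctly sorted, because any pivot inside the partitioned range leads back to the same correct result.

```java
import java.util.Arrays;

public class PivotPerturbation {

    // Quicksort where the pivot index is perturbed (shifted by one when possible).
    // As long as the perturbed index stays inside [lo, hi], the array is still
    // sorted correctly: the execution is "attracted" back to the correct output.
    static void sort(int[] a, int lo, int hi) {
        if (lo >= hi) return;
        int pivotIndex = lo + (hi - lo) / 2;
        if (pivotIndex + 1 <= hi) pivotIndex++;   // the perturbation
        int pivot = a[pivotIndex];
        int i = lo, j = hi;
        while (i <= j) {
            while (a[i] < pivot) i++;
            while (a[j] > pivot) j--;
            if (i <= j) { int t = a[i]; a[i] = a[j]; a[j] = t; i++; j--; }
        }
        sort(a, lo, j);
        sort(a, i, hi);
    }

    public static void main(String[] args) {
        int[] data = {5, 3, 8, 1, 9, 2};
        sort(data, 0, data.length - 1);
        System.out.println(Arrays.toString(data)); // [1, 2, 3, 5, 8, 9]
    }
}
```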

Discover the reasons behind correctness attraction in this blog post:

Nov 06 2018

Luc Esape, artificial software fixer, unmasked by The Register


Luc Esape, aka Repairnator, is unmasked! The Java software fixer recently earned a world-class reputation as a smart bot, thanks to an article posted on The Register by Thomas Claburn, a human reporter based in San Francisco, California.

The Register article is entitled: The mysterious life of Luc Esape, bug fixer extraordinaire. His big secret? He's not human

For the Inria researchers of the Spirals team at the University of Lille, this international recognition underlines the ability of open source software to fix bugs through automatically generated patches, delivered within minutes during continuous integration and continuous delivery.

A quote from KTH Professor Martin Monperrus, Repairnator and STAMP contributor, confirms the bot's track record: in a few weeks, Repairnator produced five patches that were accepted by human developers and merged into their respective code bases. "This is a milestone for human-competitiveness in software engineering research on automatic program repair", he explains.

The online article, along with multiple comments, also raises open questions about legal responsibility for patches and the future of DevOps careers.

Sep 20 2018

Mutation testing is a serious game


Thanks to this tweet from Arie van Deursen, head of the Software Technology Department at TU Delft and STAMP project partner, we are glad to share this online resource where you can learn about mutation testing through a serious game.
Pick your side carefully: attackers mutate the software code, while defenders add new tests. And let us know about your gamification experience...
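To give a flavour of the game, here is a made-up example (not one taken from code-defenders.org): an attacker introduces a small mutant, and a defender writes a test that kills it.

```java
import static org.junit.Assert.assertTrue;
import org.junit.Test;

public class MutationGameExample {

    // Original code under test.
    static boolean isAdult(int age) {
        return age >= 18;
    }

    // Attacker's mutant: the boundary condition is weakened from >= to >.
    static boolean isAdultMutant(int age) {
        return age > 18;
    }

    // Defender's test: it exercises the boundary value, so it passes on the
    // original code but would fail on the mutant, i.e. it "kills" the mutant.
    @Test
    public void adultAtExactlyEighteen() {
        assertTrue(isAdult(18));           // passes on the original
        // assertTrue(isAdultMutant(18));  // would fail: the mutant is killed
    }
}
```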

More on code-defenders.org 

Sep 19 2018

Repairnator to repair software bugs on a large scale


Repairnator is an innovative bot that repairs software bugs on a large scale, and it is now available to all developers as an open source solution, with a STAMP connection!

This development comes from the Spirals team, a joint team between Inria and the University of Lille within CRIStAL (UMR 9189, CNRS, Centrale Lille, University of Lille). More…

Sep 18 2018

Facebook SapFix and Sapienz to find and fix software bugs

Facebook engineers are investigating code automation, using Artificial Intelligence in Sapienz and mutation-based fixes in SapFix.
Both tools are designed to speed up the deployment of new software by shipping code that is pre-tested and as stable as possible.
According to a recent article, they are intended for open source release in the future, once additional engineering work is completed, but no date is mentioned. More…

Sep 17 2018

Google Test Efficacy: running software at scale

Peter Spragins, Google Software Engineer and Teaching Assistant at the UCSD Math Department, summarizes almost four years of experience in running software tests at scale, together with several colleagues in Mountain View (California).
"The two key numbers for the system's performance are sensitivity, the percentage of failing tests we actually execute, and specificity, the percentage of passing tests we actually skip. The two numbers go hand in hand."
Discover how Machine Learning is now part of Google's process of committing code. Read his article about the Efficacy Presubmit Service
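As a small worked example of those two numbers (the counts below are entirely made up, not Google's):

```java
public class EfficacyMetrics {
    public static void main(String[] args) {
        // Hypothetical counts for one batch of presubmit runs.
        int failingTestsExecuted = 95;   // failing tests the system chose to run
        int failingTestsTotal    = 100;  // all tests that would have failed
        int passingTestsSkipped  = 700;  // passing tests the system safely skipped
        int passingTestsTotal    = 1000; // all tests that would have passed

        double sensitivity = 100.0 * failingTestsExecuted / failingTestsTotal; // 95.0 %
        double specificity = 100.0 * passingTestsSkipped  / passingTestsTotal; // 70.0 %
        System.out.printf("sensitivity = %.1f%%, specificity = %.1f%%%n",
                sensitivity, specificity);
    }
}
```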

Sep 12 2018

Inauguration of Castor Research Center

A collaboration between KTH, Saab and Ericsson, the CASTOR Software Research Center was inaugurated today at Östermalm (Sweden), with over 50 guests including KTH professors, researchers, industry representatives and employees from the French embassy and Vinnova. 

Prof. Benoit Baudry underlined the aim of delivering outstanding research in software engineering and expressed his wish to increase collaboration through more co-development of open source software tools. The goal is also to increase the number of industrial PhD students running the core research activities of the center, and to help reduce the cultural gap that currently exists between industry and academia around software technology.

Ingemar Söderquist (Saab) and Diarmuid Corcoran (Ericsson) shared their vision about the challenges and opportunities for software technology in their respective application areas (defense and telecom).

Robert Feldt, Professor of Software Engineering at Chalmers University of Technology in Gothenburg, talked about his experience in setting up collaborations with industry on software research in Sweden.

Kristina Höök, Professor in Interaction Design at KTH, presented her insights after having led the “Mobile Life” research center at KTH for more than 10 years.

The official opening was performed by Pontus de Laval (CTO, Saab), Dr. Magnus Frodigh (Acting Head of Research, Ericsson) and Prof. Annika Trigell (KTH Vice-President for Research), and was followed by a reception dinner.

Check out the Castor Research Center inauguration presentations and photos 

Sep 05 2018

STAMP and DeFlaker approaches compared


Flaky tests raise a major testing problem in the software industry, in terms of performance overhead.

By automatically detecting flaky tests, DeFlaker provides a new milestone for coping with them in a principled way: there is no need to re-run failed tests anymore.
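The intuition can be sketched as follows. This is a simplified illustration of the idea, not DeFlaker's actual API or implementation: a test that newly fails without having executed any of the code changed by the commit is a strong flakiness suspect.

```java
import java.util.Map;
import java.util.Set;

public class FlakinessCheck {

    /**
     * Flags a newly failing test as likely flaky when its coverage does not
     * intersect the set of lines changed by the latest commit.
     */
    static boolean isLikelyFlaky(Set<String> changedLines, Set<String> coveredLines) {
        return coveredLines.stream().noneMatch(changedLines::contains);
    }

    public static void main(String[] args) {
        Set<String> changedByCommit = Set.of("Parser.java:42", "Parser.java:43");
        Map<String, Set<String>> coverageOfNewlyFailingTests = Map.of(
            "ParserTest#handlesEmptyInput", Set.of("Parser.java:42", "Parser.java:80"),
            "NetworkTest#timeout",          Set.of("HttpClient.java:10"));

        // Prints (order may vary):
        //   ParserTest#handlesEmptyInput likely flaky? false
        //   NetworkTest#timeout likely flaky? true
        coverageOfNewlyFailingTests.forEach((test, covered) ->
            System.out.println(test + " likely flaky? "
                + isLikelyFlaky(changedByCommit, covered)));
    }
}
```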

In the STAMP project, we follow a similar philosophy: we target major testing problems (missing assertions, crash reproduction) and we invent principled tools that address those problems and are evaluated on large and complex software projects. These projects come both from the STAMP project partners and from international open source members of the OW2 community. More…

Sep 03 2018

Descartes Tutorial at ASE 2018

Place: Montpellier Corum Conference Center
Conference: ASE 2018
Instructors: Benoît Baudry (KTH), Vincent Massol (XWiki), Oscar Luis Vera Pérez (INRIA)

Let the CI spot the holes in tested code with the Descartes tool

Bring your laptop, your favorite Java project (with JUnit tests) and find out how much of the covered code is actually specified by the test suite!

In this tutorial, we introduce the intriguing concept of pseudo-tested methods, i.e. methods that are covered by the test suite, yet no test case fails when the method body is removed. We show that such methods can be found in mature, well-tested projects and we discuss some possible root causes. Attendees have the opportunity to experiment hands-on with our tool, Descartes, which automatically detects pseudo-tested methods in Java projects. More…
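A minimal, hypothetical illustration of a pseudo-tested method (not taken from the tutorial material): the test below covers applyDiscount, yet nothing asserts on its effect, so an extreme mutation that empties the method body would go unnoticed.

```java
import static org.junit.Assert.assertNotNull;
import org.junit.Test;

class Cart {
    private double total = 100.0;

    // Covered by the test below, but pseudo-tested: if the body is replaced
    // with a no-op, the test still passes because nothing observes 'total'.
    void applyDiscount(double percent) {
        total = total - total * percent / 100.0;
    }

    double getTotal() {
        return total;
    }
}

public class CartTest {
    @Test
    public void discountDoesNotCrash() {
        Cart cart = new Cart();
        cart.applyDiscount(10);
        assertNotNull(cart);   // covers applyDiscount, but checks nothing useful
        // A stronger test would assert on the observable effect:
        // assertEquals(90.0, cart.getTotal(), 0.001);
    }
}
```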

Jul 20 2018

Resolving Maven Artifacts with ShrinkWrap... or Not

While trying to generate custom XWiki WARs directly from the unit tests, Vincent Massol, XWiki CTO, gave the ShrinkWrap Resolver a try.
Follow his work on this article about Resolving Maven Artifacts with ShrinkWrap... or Not 
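For context, the canonical ShrinkWrap Resolver call chain looks roughly like the snippet below; the artifact coordinates are illustrative, not necessarily the ones used in the article.

```java
import java.io.File;
import org.jboss.shrinkwrap.resolver.api.maven.Maven;

public class ResolveExample {
    public static void main(String[] args) {
        // Resolve an artifact (hypothetical coordinates) and its transitive
        // dependencies from the local/remote Maven repositories.
        File[] jars = Maven.resolver()
                .resolve("org.xwiki.platform:xwiki-platform-oldcore:10.6")
                .withTransitivity()
                .asFile();
        for (File jar : jars) {
            System.out.println(jar.getName());
        }
    }
}
```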

Jun 25 2018

Environment Testing Experimentations

As part of the STAMP project, Vincent Massol, XWiki CTO, is conducting five environment testing experiments and discusses their issues and limitations.
Read more insights in his article about Environment Testing Experimentations  

May 09 2018

Automatic Test Generation with DSpot

DSpot is a test amplification tool that automatically generates new tests from existing test suites. It is being developed as part of the STAMP European research project, in which XWiki SAS is participating.
Read this article from Vincent Massol, CTO of XWiki SAS, about Automatic Test Generation with DSpot

Nov 17 2017

Controlling Test Quality

How can you verify the quality of your tests?
Vincent Massol, XWiki CTO, suggests a strategy for Test Quality Control

Nov 08 2017

Flaky tests handling with Jenkins and JIRA

Flaky tests are a plague because they lower the credibility of your CI strategy by sending false-positive notification emails.

Vincent Massol, XWiki Technical Director, suggests a new strategy for handling flaky tests.

Oct 29 2017

Creating your own project's Quality Dashboard

Conference: SoftShake 2017, Geneva

Presented at SoftShake 2017 in Geneva by Vincent Massol, Technical Director of XWiki SAS, this brand new presentation explains how to use XWiki to create a custom Quality Dashboard: aggregating metrics from other sites (Jenkins, SonarQube, JIRA and GitHub), saving them locally to draw history graphs, and sending emails when combined metric thresholds are crossed. More…
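As a rough sketch of the aggregation idea only (the actual dashboard is built inside XWiki; the URL and JSON field below are hypothetical):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class QualityThresholdCheck {
    public static void main(String[] args) throws Exception {
        // Fetch a quality metric from a CI server's JSON API (hypothetical URL).
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(
                URI.create("https://ci.example.org/job/my-project/api/json")).build();
        String json = client.send(request, HttpResponse.BodyHandlers.ofString()).body();

        // Crude extraction for the sketch; a real dashboard would use a JSON
        // parser, store the value for history graphs, and combine several sources.
        boolean unstable = json.contains("\"result\":\"UNSTABLE\"");
        if (unstable) {
            System.out.println("Threshold crossed: notify the team by email");
        }
    }
}
```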

Sep 28 2017

Mutation testing with PIT and Descartes

Vincent Massol, Technical Director of XWiki SAS, wrote an article about a recent experiment with Descartes, a mutation engine for PIT, in the framework of the STAMP project.

Here's an example of running Descartes on an XWiki module:
[Screenshot: PIT test report]

For more information, see the PIT test report and read Vincent Massol's blog post: Mutation testing with PIT and Descartes

Sep 17 2017

Using Docker and Jenkins to test configurations

XWiki SAS is part of the STAMP research project and one domain of this research is improving configuration testing.

In this article, Vincent Massol, Technical Director of XWiki SAS, suggests a new architecture that should allow XWiki to be tested on various configurations, including various supported databases and versions, various Servlet containers and versions, and various browsers and versions.

Aug 30 2017

Mutate and Test Your Tests

by Benoit Baudry

I am extremely proud and happy that my talk on mutation testing got accepted as an early bird for EclipseCon Europe 2017.

We will talk a lot about software testing at the project quality day. In this talk, I will focus on the qualitative evaluation of a unit test suite. Statement coverage is commonly used to quantify the quality of a test suite: it measures the ratio of source code statements that are executed at least once when running the test suite. However, statement coverage is known to be a rather weak quality indicator. For example, a test suite that covers 100% of the statements and has absolutely no assertion is a very bad test suite, yet it is considered of excellent quality according to statement coverage.
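A minimal, hypothetical illustration of that point: the test below executes every statement of divide, so statement coverage reports 100%, yet it asserts nothing and would not catch a faulty change.

```java
import org.junit.Test;

public class CoverageWithoutAssertions {

    static int divide(int a, int b) {
        if (b == 0) {
            throw new IllegalArgumentException("division by zero");
        }
        return a / b;
    }

    // Executes both branches of divide(), so statement coverage is 100%,
    // but there is no assertion: a mutant returning a * b would still pass.
    @Test
    public void coversEverythingChecksNothing() {
        divide(10, 2);
        try {
            divide(10, 0);
        } catch (IllegalArgumentException expected) {
            // swallowed on purpose; nothing is verified
        }
    }
}
```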

More…

Jun 06 2017

Jenkins Pipelines: Attach Failing Test Screenshot

How do you attach a failing test's screenshot to a Jenkins Pipeline?
Read the article by Vincent Massol, XWiki CTO, about Jenkins Pipelines

May 10 2017

TPC Strategy Check

Read the article by Vincent Massol, XWiki CTO, about his TPC (Test Percentage Coverage) Strategy Check

Dec 10 2016

Fully Automated Test Coverage with Jenkins and Clover

Generating a full coverage report for a multi-reactor project is a complex task.
Fortunately, Vincent Massol, XWiki CTO, provides a script with clear explanations for that need.
Ready to jump in?
Read his article on generating test coverage reports with Jenkins and Clover