Tracking API changes per code branch

(Kmeister) #1

Is there a preferred workflow for tracking multiple changes to an API that is in development?

My example is:

  1. Developer A is building feature A, which needs its own tests.
  2. Developer B is building feature B, which needs its own tests.
  3. Codebases A and B can each be tested against the correct version of the API.
  4. When the features are merged together, both sets of tests can be run.

Any ideas on how to do this with Stoplight would be most welcome.

(Marc) #2

Hi @kmeister, great question!

Short Answer

The short answer is that we are working towards a specific workflow, but not all of the pieces are in place to support it quite yet. For now, we would recommend creating a scenario collection file for each independent set of tests/features, and then either merging them together when you are done, or simply triggering both collections when testing.

To merge them together, you can switch to code view and copy all of the scenarios in file A, then paste them in the code view of file B.

This is a temporary measure until we get the full versioning system in place.
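To make the copy/paste merge concrete, here is a rough sketch of what a merged collection could look like. The field names below are illustrative assumptions, not the actual Stoplight scenario schema:

```yaml
# Hypothetical merged collection -- the schema shown here is
# illustrative only, not the real Stoplight scenario format.
scenarios:
  # Copied from the feature A collection file
  createBook:
    name: Create a book
    steps:
      - method: POST
        url: /books
  # Copied from the feature B collection file
  listUsers:
    name: List users
    steps:
      - method: GET
        url: /users
```

Because each scenario lives under its own key, pasting the scenarios from file A into file B works as long as the two files don't reuse the same scenario names.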

Long Answer

Under the hood, projects in Stoplight are git repositories, and git serves as the foundation for the versioning workflow that we are working towards. We’re building a lightweight UI around git to help manage and merge multiple versions from within Stoplight (or, if you prefer, you can of course use whatever Git tooling you want to help manage the merging/branching etc).

In fact, you can already see indicators of the git underpinnings in our URLs (note the master version segment), and in features like the change history (which pulls its data from the git commit history).

The UI we are building is not meant to be a full and complex git client, but rather a simpler set of features to help all personas (from product manager to technical writer to developer) collaborate around versions inside of Stoplight. However, because we're built on git, those with the technical chops who want to drop down to lower-level git operations can do so, while product managers and others use the lightweight Stoplight UI.

I hope this helps to answer your question, and provides some insight into where we’re headed! Let me know if anything isn’t clear or if you have other ideas or suggestions :slight_smile:.

(Nicolas Tisserand) #3

Hi @kmeister,

In addition to Marc's answer, there is a first condition to fulfill before testing: the Swagger must contain both features A and B.
This point is not always easy to deal with. For example:

  • If A is about “books” and B is about “users” with no link between them: no problem, the Swaggers can be merged and you can follow Marc’s advice.
  • If A and B both concern “books”, and B needs to change the “books” model, then you are really talking about API versioning, and that is more difficult. Depending on the complexity of your project, you’ll probably have to stage the deployment of your features, manage the migration of the consumers, and so on.
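The easy “no link between them” case can be sketched as a merged Swagger 2.0 fragment. The paths and descriptions here are made up for illustration:

```yaml
# Illustrative merged Swagger -- endpoint names are hypothetical.
swagger: "2.0"
info:
  title: Merged API
  version: "1.0"
paths:
  /books:        # contributed by feature A
    get:
      responses:
        "200":
          description: A list of books
  /users:        # contributed by feature B
    get:
      responses:
        "200":
          description: A list of users
```

Since the two features touch disjoint paths and models, the merge is a straight union; it is only when both branches edit the same model that versioning questions arise.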

(Kmeister) #4

These are good points, @marc @ntiss

Our system is more like what Nicolas describes, which is what tracking scenario changes locally gives us: the tests move along with the API code of that feature release.

The two separate test scenario files Marc describes may be what we need to do. We have had major conflicts when automatically merging the scenario file, so multiple separate files would be ideal.

If Prism correctly supported local shared utility methods, we would be in a much better position, as each feature could live in its own JSON file. Right now we have a monolithic scenarios JSON; it would be quite overwhelming if not for the Stoplight app.

(Marc) #5

Interesting, thanks for the responses everybody. We’ve been playing around with an idea internally, would love to get some outside opinions on it:

  • Prerequisite: robust project folder + file management instead of the flat list of files we currently support; we are already working on this.

  • New: a new file type, {whatever}.scenario.yaml. Note the singular scenario extension instead of scenarios. A scenario file would describe a single scenario (which can have many steps).

  • New: Run all scenario files that are inside a project folder. This would include nested folders. So you could run an individual scenario by running a single file, a group of scenarios by running the parent folder, all of the scenarios in a project by running at the top most folder, etc.

  • New: Prism from the command line would support targeting a folder. It would run all of the scenario files inside that folder.
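To make the proposal above a bit more concrete, a single-scenario file might look something like this. This is purely speculative, since the feature does not exist yet; the file name, folder layout, and every field below are assumptions:

```yaml
# books/create-book.scenario.yaml -- hypothetical example of the
# proposed singular .scenario.yaml file type; schema is illustrative.
name: Create a book
steps:
  - method: POST
    url: /books
    body:
      title: The Pragmatic Programmer
    expect:
      status: 201
```

Under the proposed folder-based runner, running the books/ folder would execute every *.scenario.yaml inside it (including nested folders), and running the project root would execute them all.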

Can be difficult to visualize without the designs, but hopefully you get the idea. Thoughts @ntiss @kmeister?


@kmeister we actually have a new $ref resolver live in production that supports local file references. All new $refs are now relative by default (e.g. ./utils.scenarios.yaml) instead of absolute URLs. It was a major piece of the puzzle that we had to solve before we could bring back the desktop local file editing features in the way we want. There are a couple more things to solve, and then we’ll bring back a new and much more robust local desktop editing experience!
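As a sketch of what a relative $ref could look like in practice: only the ./utils.scenarios.yaml file name comes from Marc's post, and the surrounding step schema and JSON-pointer fragment are illustrative assumptions:

```yaml
# feature-a.scenarios.yaml -- illustrative; the step schema is assumed,
# but the relative $ref style matches the new resolver's behavior.
scenarios:
  createBook:
    steps:
      # Reuse a shared login step from a local utility file
      - $ref: ./utils.scenarios.yaml#/scenarios/login/steps/0
      - method: POST
        url: /books
```

With relative references like this, each feature's scenario file can pull in shared utility steps without hard-coding an absolute URL, which is what makes per-feature files workable.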

(Nicolas Tisserand) #6

Hi @marc,

Awesome, it sounds great!
However, I have some remarks about the OAS coverage score.

Currently, the coverage score is only updated when the whole collection is run.

If I understand your proposal correctly, the “collection” disappears and is replaced by folders.
Where would the coverage score be displayed, and how would it be computed if I run only one scenario?
Will it be possible to pick only some scenario files, or will I have to put them in the same folder (just like a collection)?
Perhaps providing table-driven testing someday would be a good start?

Anyway, it will change users’ habits. You’ll have to do good migration work to split the collections properly and keep the same URL in the exporter (so as not to break existing tests run with Prism from the command line).

Very ambitious, I wish you the best for this.

(Marc) #7

One of our goals is to store and show the latest coverage / test results for each project in the organization dashboard. This means we have to start storing the test results and coverage numbers (instead of generating them on demand when you run your scenarios, and then throwing the data away).

Our latest thinking on how to achieve this, which also addresses your coverage question above:

  1. We know how many “coverable things” there are in your project, by examining how many open api files and api operations there are, etc.
  2. We store coverage statistics in a file in the project. Each of the things in step 1 is added to this file, with some data about whether it has been tested and what the last result was.
  3. When you run a scenario step, scenario, or scenario collection, we update the coverage statistics file to indicate “x y z” operations passed tests and are covered.

If you run all the scenarios in a project at once (running the collection), the entire coverage file is basically re-computed. If you run just a single step, only part of the coverage file is updated/added. This means you can run one scenario at a time, and end up with the same coverage result as if you ran the entire collection at once.
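The coverage statistics file described above might look something like the sketch below. This is an early idea, so every file name and field here is speculative:

```yaml
# coverage.yaml -- speculative sketch of the proposed coverage
# statistics file; none of these fields exist today.
operations:
  GET /books:
    tested: true
    lastResult: passed
  POST /books:
    tested: true
    lastResult: failed
  GET /users:
    tested: false
    lastResult: null
covered: 2    # operations with a recorded test run
total: 3      # coverable operations found in the project
```

Running a single scenario would update only the entries for the operations it touched, while running everything would recompute the whole file, which is why the two paths converge on the same coverage numbers.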

Does that make sense?

All of these ideas are early, we’re not implementing any of this yet! Just gathering feedback and seeing if they make sense to work on in the coming year.

(Kmeister) #8

The idea of separate folders seems easier in our current workflow. That combined with the ability to reference utility scenarios in other folders would be amazing.

For the OAS coverage score, I’m actually embarrassed to say we’ve not been maintaining the OAS2 files well on our project. We’ve had difficulty with a couple of portions, but I’ll split that conversation out into another thread.

(Marc) #9

We’ve put together our latest thoughts on version + release management here:

It also includes a screenshot of the in-progress UI. We welcome any feedback; there is still time to make adjustments as needed!

(Nicolas Tisserand) #10

Hi Marc,

I have not yet read all of your (long) text, but the most important part is:

be simple to understand and use for users that don’t know git

Thanks for thinking of the business users!! (product owners and product managers)

(Marc) #11

Absolutely! We’re taking a number of steps over the next 6 months to make the platform work more intuitively for non developers. I think Ross already mentioned this to you, but we hope to have the main ideas ready to present to you and others for feedback soon.

(Kmeister) #12

I’m on the fence about which features a non-developer would actually use. On my projects, I’ve never had a non-developer define an API or look at documentation to understand how to use one.

(Erik Hansen) #13

@kmeister I agree that non-developers typically will not be defining APIs. However, non-developers can be a part of the API design, test, and doc process in more of a review capacity. Here is how I envision some of our non-developer users interacting with Stoplight.

  • Technical Docs Team:
    • Review descriptions within the model
      • Must have some way to comment or suggest changes inline
    • Create or modify Markdown pages for things like Getting Started and Tutorials
      • Again, business user friendly edit/review tools helpful here
  • Test/QA Team:
    • Define or revise scenarios
      • QA users are likely comfortable with Git, but having review capabilities built into Stoplight would be helpful
  • UI / Consumer Dev Team:
    • Make suggestions for design change to meet needs of the consumer
      • Since our API design should meet consumer needs, they should be able to review and post suggestions on response schema changes
      • A pull request and/or issue tracking sort of model would fit in perfectly here

@marc I am looking forward to the upcoming changes that can help make this a more collaborative environment for all our users. Thanks!

(Marc) #14

Agree with what @erik.hansen outlined. What we are finding is that responsibility for the API (product) requirements/design side of things is getting pushed left towards PMs and other non-developer roles (still technical, but not git-wizard technical like a developer). @ntiss might have some insights into how their PMs drive the process.

@erik.hansen we’ll go into more detail on our call, but at a high level we’re exploring a two-system approach: discussions and tasks. Discussions are very lightweight, realtime, and easy to attach to various files and parts of files (like a specific endpoint in a spec). Tasks are more akin to GitHub issues: a bit heavier, and meant to track goals over time.

Discussions: think “realtime chat”, but with features to make discussing specific parts of the project or spec easy.
Tasks: think “issue tracker”, but specifically built to organize design, docs, and testing related work.

(Kmeister) #15

It’s cool that you’re so responsive to community requests. I’m still wondering whether most teams already have collaboration tools that fill the needs you describe: open source projects through GitHub issues, and private companies through Jira or other task-tracking apps.

(Marc) #16

A good point. However, both GitHub issues and Jira are pretty generic, all-purpose issue trackers. From the surveys we’ve done, a collaboration solution tailored specifically to the API lifecycle has value.

The tasks side of things is actually being built on top of GitLab / GitHub issues as the backend, so that all stakeholders can still participate, whether they are a developer in GitHub or a technical writer in Stoplight. We’re building for a specific domain (API lifecycle management), and so can tailor the solution more appropriately than a generic option like GitHub issues. Hope that makes sense!