OASIS Mailing List Archives



Subject: potential SARIF extension/moving forward standard (STICLA) for analysis behavior/interfaces

Hello, everyone,


I’ve already reached out to some people on the committee about this and wanted to share it with the broader group to assess interest.


Microsoft is considering additional investments (beyond SARIF) in standardizing modern static analysis tools. We have a tentative name of ‘Standard Interface for Command-line Analysis’ (STICLA) for the proposal. It covers analysis tool command-line interfaces, inputs, and runtime behaviors. Basically, a standard to cover the ‘ins’ as well as the ‘outs’ (the ‘outs’ being SARIF). We suspect this effort might result in some changes or extensions to SARIF. Certainly the SARIF concepts of analysis ‘policies’ and taxonomies would be explored (as part of understanding whether these concepts can be used, for example, to define a standard min-bar file that can be maintained as code, deployed in the context of a CI/CD pipeline, etc.).


My sense is that most of us who’ve thought about this acknowledge both the need for it and its complexity, but are less optimistic about the possibility of modifying tool behavior to conform to a standard (as opposed to modifying tool outputs, which seemed, and has proven to be, an easier sell). There is also, I think, broader interest in consuming a standard tool log than in driving multiple tools in a common environment.


Still, the need internally at Microsoft is clear. Teams in our company have very few cycles to configure/enable new analysis. This is an inhibiting factor in tools development (for which we have ongoing needs). So we have a clear goal of attempting to enable ‘free’ onboarding of emerging tools. Our strategy has been to create an analysis framework which provides a standard command line, but which is also mature and hardened in terms of logging, threading, and other reliability concerns. This frees tool developers to focus, as far as possible, on code that’s related to inspecting/reporting.


We’re at a fork in the road: if we find significant interest elsewhere in partnering, we’ll start a new standard design process. If support looks uncertain now, we will continue to develop a sort of v1 internally, which may lead to a future standards effort.


If anyone has comments or feedback, if only to weigh in on the value of/need for this, I would appreciate hearing from you.





Teams/systems with a stake in plug-in/tool configuration/behavior at Microsoft/GitHub include:

  • Analysis tool-executing harnesses, including GitHub’s static analysis workflows (DSP) and Microsoft’s Guardian. These systems are responsible for installing/configuring/executing multiple static analysis tools and post-processing their (now standardized as SARIF) outputs. Microsoft’s internal systems in particular depend on genericizing/aggregating multiple tools into a single workflow.
  • Advanced build systems such as BuildXL support integrating arbitrary tools into the build. Tool behaviors can have a significant impact on the performance and reliability of these systems: deterministic outputs may maximize cache utilization, for example, while unexpected behavior differences based on reading registry settings may compromise reliability. Etc.
  • Other static analysis results management systems, including IDE environments, work items, etc. Users would benefit from common experiences across the range of tools, such as a common mechanism for configuring a static analysis min-bar.


The STICLA standard would design (and provide a reference implementation for) a standard command-line/REST API interface for tools to implement, and would further specify a range of tool behaviors relevant to interoperability:


  1. Analysis/Results Production. STICLA will standardize arguments and argument evaluation behaviors for tools:
    1. How are input targets specified?
    2. How is recursion into sub-directories specified (or disabled)?
    3. How is threading/parallelization specified?
    4. How are outputs specified (should existing files be overwritten? should logs be pretty-printed?)? Etc.
  2. SARIF/log file richness/normalization. Naturally, all STICLA tools will emit SARIF. SARIF is an over-engineered, highly expressive format; STICLA tools will populate SARIF logs with data sufficient to effectively drive advanced results management scenarios, e.g.:
    1. Provide code snippets and other data to drive issue investigation directly from logs.
    2. Provide actionable user-facing strings.
    3. Provide version control mapping for all files.
    4. Persist unhandled exceptions, configuration errors, etc., in their SARIF ‘slots’ to assist in troubleshooting infrastructure issues that resolve to tool/execution or configuration breaks.
    5. Provide exhaustive data, where necessary, to support compliance/auditing scenarios (e.g., persisting all analysis configuration details to validate the min-bar enforced, emitting comprehensive scan data for all targets scanned, etc.).
  3. Run-over-Run Results Management. STICLA will specify fingerprinting and other baselining behaviors and formats, to allow for standardized run-over-run management of logically equivalent results.
  4. Rules Metadata/Docs. STICLA will require tools to emit rules metadata in a structured form supporting:
    1. Auto-generation of help topics
    2. Auto-generation of min-bar/configuration files to drive execution or filter results
  5. Capabilities/versioning discovery. STICLA analyzers will return responses to well-known discovery commands that allow aggregating infrastructure to:
    1. Understand what STICLA standards/practices the analyzer supports
    2. Understand tool changes version-over-version (to assist in upgrading/servicing tools without breaking users due to behavioral changes such as new rules, false positive fixes, etc.)
  6. Runtime behaviors. STICLA will specify runtime behaviors that drive optimized execution/results processing and that prevent breaks and other reliability problems. Some of these behaviors are advanced features (which a STICLA app will document as supported or not via the capability discovery API).
    1. Deterministic output: Deterministic output drives high cache rates and eases diffing run-over-run. Analysis behaviors should not differ based on environment/registry/other settings but proceed directly from (and only from) command line args.
    2. Reliability/Negative condition handling: STICLA tools should be hardened to reliably capture and report unhandled exceptions and other negative conditions, and to maximize processing in these cases. E.g., an unhandled exception in a specific rule should be captured and reported, with all other checks completing their analysis.
    3. Scalability. STICLA tools should be designed for parallelism and distribution. Tools should not rely on having all instances running on a single machine, and should not assume only a single instance is running in a build environment. Tools should support throttling-related arguments in the execution interface. Etc.
    4. Serviceability.
    5. Standard tool exit codes for success with no analysis results, success with analysis results, and tool failure.
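To make the argument surface in item 1 concrete, here is a minimal sketch in Python. To be clear: every flag name below is invented for illustration; nothing here is part of any existing STICLA draft.

```python
import argparse

# Hypothetical sketch of a standardized analyzer argument surface (item 1).
# All flag names are illustrative, not STICLA requirements.
def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(prog="analyzer")
    parser.add_argument("targets", nargs="+",
                        help="Files or directories to analyze")
    parser.add_argument("--recurse", action="store_true",
                        help="Recurse into sub-directories of directory targets")
    parser.add_argument("--threads", type=int, default=1,
                        help="Maximum degree of parallelism")
    parser.add_argument("--output", required=True,
                        help="Path of the SARIF log to produce")
    parser.add_argument("--force", action="store_true",
                        help="Overwrite the output file if it already exists")
    parser.add_argument("--pretty-print", action="store_true",
                        help="Indent the SARIF log for human readability")
    return parser

args = build_parser().parse_args(
    ["src/", "--recurse", "--threads", "4", "--output", "scan.sarif"])
print(args.threads)  # 4
```

The point of standardizing here is that a harness (DSP, Guardian, etc.) could drive any conforming tool without per-tool adapter code for targets, recursion, threading, or output handling.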
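For item 2, a log that drives investigation directly from the file needs at least a snippet, an actionable message, and version control provenance. Here is a hand-rolled, minimal SARIF 2.1.0 fragment in Python (the tool name, repository URI, and rule are made up; the property names are from the SARIF 2.1.0 schema):

```python
import json

# Minimal illustrative SARIF 2.1.0 log exercising the "richness" items above:
# a code snippet, an actionable user-facing message, and VCS provenance.
log = {
    "version": "2.1.0",
    "runs": [{
        "tool": {"driver": {
            "name": "ExampleAnalyzer",  # hypothetical tool
            "rules": [{"id": "EX0001",
                       "shortDescription": {"text": "Avoid insecure temp files"}}],
        }},
        "versionControlProvenance": [{
            "repositoryUri": "https://example.com/repo",  # illustrative
            "revisionId": "0123abcd",
        }],
        "results": [{
            "ruleId": "EX0001",
            "message": {"text": "Replace mktemp() with mkstemp() to avoid a race."},
            "locations": [{
                "physicalLocation": {
                    "artifactLocation": {"uri": "src/io.c"},
                    "region": {"startLine": 42,
                               "snippet": {"text": "path = mktemp(template);"}},
                }
            }],
        }],
    }],
}

print(json.dumps(log, indent=2)[:80])
```

SARIF already has ‘slots’ for all of this; the STICLA bar would simply be that conforming tools must fill them.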
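And for items 5 and 6.5, one possible shape for a capability-discovery response and the standard exit codes, again purely as a sketch (the field names and exit code values are invented to illustrate the idea, not proposed normatively):

```python
import enum

# Hypothetical standard exit codes (item 6.5); the values are illustrative.
class ExitCode(enum.IntEnum):
    SUCCESS_NO_RESULTS = 0    # clean scan, nothing reported
    SUCCESS_WITH_RESULTS = 1  # scan completed, issues found
    TOOL_FAILURE = 2          # configuration or runtime break

# Hypothetical response to a well-known discovery command (item 5),
# e.g. `analyzer capabilities`; all field names are invented.
capabilities = {
    "sticlaVersion": "0.1",
    "supports": {
        "deterministicOutput": True,
        "distributedExecution": False,
        "throttlingArguments": True,
    },
    # Version-over-version rule data, to help harnesses service tool
    # upgrades without surprising users with new results.
    "rules": [{"id": "EX0001", "sinceVersion": "1.2.0"}],
}

print(ExitCode.SUCCESS_WITH_RESULTS.value)
```

With something like this, aggregating infrastructure could gate an upgrade (new rules, false positive fixes) behind a diff of the discovery response rather than a surprise in the next scan.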

