Subject: RE: [sarif] partialFingerprints: the words the world has been waiting for
Thanks for reading! Let’s put a pin in your “identifier object” question for a moment so I can ask you:
Now as to “identifier object”… as you say, it’s not useful for result.id or result.correlationId (because nobody’s going to generate a human-readable equivalent for those). As to run.automationId and run.stableId – I don’t see the point of a GUID associated with the namespaced human-readable values for these properties. GUIDs are fine when you have to guarantee uniqueness and there’s no central authority. IMO, within any given team’s engineering system, there would be no danger of choosing two identifiers with the same human-readable name but different semantics requiring them to be distinguished – because you have a central authority. The complexity/value trade-off doesn’t work for me here, but I’m open to persuasion.
Larry, I read your proposal around id and correlationId and it is clear and makes very good conceptual sense.
These things are easy to describe in the spec: an id is a GUID, generated on the fly, that is valid only for a specific result in a single log file. The correlationId is a GUID that correlates logically unique instances of a result across multiple log files.
Btw – I am wondering whether we need an identifier object, which explicitly contains a guid, a readable id (an arbitrary namespaced label that provides some hierarchy), and a description. This id object would work well for automationId and for stableId. For id and correlationId, it looks less helpful. We might consider renaming those to result.instanceGuid and result.correlationGuid.
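To make the idea concrete, here is a hedged sketch of what such an identifier object might look like for run.automationId. The property names ("guid", "readableId", "description") and the sample values are illustrative assumptions, not taken from the spec:

```python
import json
import uuid

# Hypothetical shape of the proposed "identifier object"; property names
# and values are illustrative only, not from the SARIF spec.
automation_id = {
    "guid": str(uuid.uuid4()),                  # machine-friendly, globally unique
    "readableId": "nightly/sarif-sdk/windows",  # namespaced, human-readable label
    "description": "Nightly analysis of the SARIF SDK on Windows",
}

print(json.dumps(automation_id, indent=2))
```

The GUID guarantees uniqueness without a central authority; the readable id and description carry the human-facing meaning.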
Of course, we could dispense with result.fingerprints and just keep result.correlationId, documenting that it could either be an arbitrary identifier or a calculated fingerprint value.
I’m still not quite clear on whether correlationId would need to be plural in that case.
Yekaterina, thanks for the explanation. Since my memory is usually so poor, I was pleased to find that I remembered most of what you just wrote 😊
Let me merge threads, pasting in my response to Michael:
You are saying that once you have decided that a result in today's build is logically the same as a result in yesterday's build, there's no need to persist a “fingerprint” that essentially captures the result of your comparison. Instead, you just stamp the two “logically identical” results with the same id.
If we settle on this model, I suggest that we shouldn’t use result.id for this purpose. Instead, I would introduce a new property, result.correlationId. Every single result in every single run would still have a unique result.id. Otherwise, a result management system could store only one of a set of “logically identical” results.
I would modify your step 5 as follows:
5) For each result in the current run: if it does not match a result in the baseline run, generate a new GUID and assign it to result.correlationId. If it does match a result in the baseline run, copy the baseline result’s correlationId to the new result. In either case, update result.baselineState in the current result appropriately.
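The modified step 5 can be sketched as follows. This is a minimal illustration, not a definitive implementation: the matching predicate stands in for whatever result-matching algorithm the result management system uses (SARIF does not define one), and the property names mirror the proposal above:

```python
import uuid

def matches(current, baseline):
    # Placeholder matcher for illustration: same rule in the same file
    # counts as a match. A real matcher would be far more sophisticated.
    return (current["ruleId"] == baseline["ruleId"]
            and current["file"] == baseline["file"])

def assign_correlation_ids(current_results, baseline_results):
    for result in current_results:
        match = next((b for b in baseline_results if matches(result, b)), None)
        if match is None:
            # No baseline match: a new logical result gets a fresh GUID.
            result["correlationId"] = str(uuid.uuid4())
            result["baselineState"] = "new"
        else:
            # Baseline match: copy the baseline result's correlationId.
            result["correlationId"] = match["correlationId"]
            result["baselineState"] = "existing"
    return current_results

baseline = [{"ruleId": "CA2101", "file": "a.cs", "correlationId": "abc-123"}]
current = [{"ruleId": "CA2101", "file": "a.cs"},
           {"ruleId": "CA2104", "file": "b.cs"}]
assign_correlation_ids(current, baseline)
print(current[0]["correlationId"])  # → abc-123
```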
I think I can tie this all together. There are really three concepts:
Depending on the RMS, either #2 or #3 might be used. Michael’s vision uses #2. SCA uses #3, but they call it “instance id” rather than “fingerprint” – and, they have a system for allowing users to say “these two results with different instance ids are really the same”.
I suggest that SARIF should support both usage patterns. Adding result.correlationId supports Michael’s suggested pattern. Keeping result.fingerprints supports SCA usage. SCA could, if they chose, store their “instance id” in result.correlationId, but it would be a slight abuse of the semantics I proposed above.
Just want to chime in to explain what we do at Fortify. In SCA, our instance ids (result ids that allow us to track what’s happening to a result over time) are “fingerprints” calculated by a complex algorithm that takes into consideration various things like file names, the sources/sinks involved in generating the result, the rule ids involved, etc. The idea is that, no matter where we run the tool and how many times, if the code hasn’t significantly changed and the result did not get fixed, the same exact instance id gets generated for the same exact issue.
As mentioned, this is a complex algorithm, which sometimes fails to generate the same exact instance id for various reasons, and so our results management system tries to correlate results from multiple scans to indicate that something might be the same exact result as generated before. The user of the system has to verify that he/she agrees with these correlations. However, we never assign a different instance id to an already generated result.
So, to me it looks like we would only be using the id property of the result object, and neither the fingerprints nor the partialFingerprints properties.
Do let me know if I’m missing something.
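Fortify’s actual algorithm is proprietary and far more elaborate than anything shown here; purely as a toy illustration of the stable-instance-id idea described above, one could hash location-independent properties of a result so that the same issue yields the same id on every run:

```python
import hashlib

def instance_id(rule_id, file_name, source, sink):
    # Toy stand-in for a real fingerprinting algorithm: hash properties
    # that survive unrelated code churn, so the same issue yields the
    # same id on every run until it is fixed. Inputs are illustrative.
    payload = "|".join([rule_id, file_name, source, sink])
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

a = instance_id("SQL_INJECTION", "Login.java", "request.getParameter", "executeQuery")
b = instance_id("SQL_INJECTION", "Login.java", "request.getParameter", "executeQuery")
print(a == b)  # → True: same issue, same id, run after run
```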
My thinking which I tried to articulate in today’s discussion, more or less successfully, is that result matching is not a matter of comparing a previously computed fingerprint to another. Instead, result matching is a complex algorithm that tries to stitch various results together. If unsuccessful in producing an exact match, the algorithm may fall back to partial fingerprints, which are essentially logical- and physical-location-free things that may still help determine issue identity (in practice, a result matcher might still have a notion of two files that should be compared for the match, but have lost all other useful intra-file location details).
With the definition above, a partial fingerprint is partial in the sense that it is a speculative match that doesn’t benefit from other data that would increase confidence in a match. It is also a contribution, as per our previous definition, in the sense that you might try to glue this information to whatever else you have (such as a file name, where you’ve lost the location details).
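The fallback described above might be sketched like this. It is a speculative sketch, assuming a two-tier matcher: an exact match on full location details first, then a partial-fingerprint comparison that ignores intra-file location details. All property names and the matching criteria are illustrative assumptions:

```python
def match_result(current, baseline_results):
    # Tier 1: exact match on full location details.
    for b in baseline_results:
        if (current["ruleId"], current["file"], current["line"]) == \
                (b["ruleId"], b["file"], b["line"]):
            return b
    # Tier 2: fall back to partial fingerprints. These are location-free,
    # so the match survives line churn within the same file.
    for b in baseline_results:
        if (current["ruleId"] == b["ruleId"]
                and current["file"] == b["file"]
                and current["partialFingerprints"] == b["partialFingerprints"]):
            return b
    return None

baseline = [{"ruleId": "R1", "file": "a.c", "line": 10,
             "partialFingerprints": {"contextHash": "t1"}}]
moved = {"ruleId": "R1", "file": "a.c", "line": 42,
         "partialFingerprints": {"contextHash": "t1"}}
print(match_result(moved, baseline) is baseline[0])  # → True
```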
I think the most significant impact of the reorientation above is how we think of result.fingerprints. This data now becomes mostly a placeholder for data produced by legacy formats. We wouldn’t expect fingerprints to be populated by a result management system. Instead, this is what we’d see:
And that’s it. At no point does it seem critical to populate the fingerprints object. You could imagine the fingerprints of the baseline log file containing some fingerprints that will always match if the file name and physical location details haven’t changed. But how useful is that? (We already have file hashes to tell us this.) If you have to diff two files anyway to overcome line churn, the extra work of prepopulating and storing fingerprints might not be worth the cost.
For a long time we’ve agreed that partialFingerprints shouldn’t include information that’s deducible from the SARIF file, but the spec has never said so. As part of the “fingerprints” draft that I just merged and pushed, Appendix B now says the magic words:
An analysis tool SHALL NOT include in partialFingerprints information that a result management system could deduce from other information in the SARIF file, for example, file hashes. Rather, the result management system would use such information, along with partialFingerprints, in its computation of fingerprints.
I understand that our vision of partialFingerprints is still evolving, but this will do for now.
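As a hedged illustration of the rule quoted above: the tool supplies only the non-deducible partialFingerprints, and the result management system folds in deducible data (here, a file hash it already has) when computing the final fingerprint. The function name and key names are assumptions for the sketch:

```python
import hashlib

def compute_fingerprint(partial_fingerprints, file_hash):
    # Hypothetical RMS-side computation: combine tool-supplied
    # partialFingerprints with information deducible from the SARIF
    # file (here, a file hash) into one fingerprint.
    h = hashlib.sha256()
    for key in sorted(partial_fingerprints):  # sort for determinism
        h.update(f"{key}={partial_fingerprints[key]};".encode("utf-8"))
    h.update(file_hash.encode("utf-8"))
    return h.hexdigest()

fp1 = compute_fingerprint({"contextRegionHash/v1": "ab12"}, "9f86d081")
fp2 = compute_fingerprint({"contextRegionHash/v1": "ab12"}, "9f86d081")
print(fp1 == fp2)  # → True: deterministic across runs
```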