[Date Prev] | [Thread Prev] | [Thread Next] | [Date Next] -- [Date Index] | [Thread Index] | [List Home]
Subject: Re: JSON or what???
I would like to work on both at the same time. We NEED a good solid model, but we also need an amazing JSON binding. What we found in TAXII land when we did the JSON binding is that some of the things in the model needed to be tweaked because of what we learned from the binding. My utopian goal would be:

1) Work on the model and get it 75-80% of where we think it needs to be.
2) Switch gears and flesh out the JSON binding to greater than 90%.
3) Take lessons learned and things that are weird, and feed them back into the model design.
4) Finish the model.
5) Finish the JSON binding.

On Nov 13, 2015, at 17:42, Paul Patrick <email@example.com> wrote:

Bret,
I, for one, am definitely pleased to hear your position on a STIX Lite. I think you'll find fairly uniform support from a wide number of people to clean up, simplify, and address specific issues. I believe what a number of us would like to see is that we address this work in the data model first and then work through the binding. But again, I think you'll find people willing to collaborate to find solutions that are acceptable to the broader community.
Paul Patrick
iSight Partners
Sent from my iPhone
On Nov 13, 2015, at 7:27 PM, Jordan, Bret <firstname.lastname@example.org> wrote:
I am not suggesting we remove things from STIX or decrease the expressiveness of STIX as a whole. Let's just get that on the table...
A few days ago I did allude to the idea of STIX 2.0.0 having stepping stones, where people that need all of STIX could keep using STIX 1.2, but for new people (aka new code) we could start smaller and build on it. (And yes, it is very possible to do.)
However, that idea got some serious pushback. I just thought it would be a faster way to get something out the door for those groups that will ONLY ever work with Indicators, Observables, and Sightings and never anything else. Those groups that need everything else can keep using 1.2 until about STIX 2.5 or so, when the STIX 2.x branch catches up to 1.2.
Despite that debate, what I am fundamentally asking for, and have been suggesting for a long time, is that we clean up some of the complex nested structures that exist in our implementation. In TAXII we did this by flattening the way the model was represented. It took several iterations to get right, but in the end I think we got it. Thanks to Terry, Jason, Sergey (EclecticIQ), and Mark for really helping to make that happen.
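As an illustration of what flattening means in practice (these are made-up structures for the sake of the example, not the actual TAXII or STIX field names):

```python
import json

# Hypothetical deeply nested structure, in the spirit of STIX 1.x XML
# converted directly to JSON. Every value hides behind wrapper objects.
nested = {
    "indicator": {
        "title": {"value": "Malicious IP watch"},
        "observable": {
            "object": {
                "properties": {"address_value": {"value": "198.51.100.7"}}
            }
        },
    }
}

# A flattened equivalent: one object with simple top-level fields.
flat = {
    "type": "indicator",
    "title": "Malicious IP watch",
    "address_value": "198.51.100.7",
}

# Same information, but far fewer layers for a developer to traverse.
print(json.dumps(flat, indent=2))
```

The point is not the specific field names; it is that a developer reading `flat` can see the whole object at a glance, while `nested` requires tooling or documentation just to find the address value.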
On Nov 13, 2015, at 16:14, Terry MacDonald <email@example.com> wrote:
I wholeheartedly agree that the model needs simplification and tweaking. But I am worried that we are going to head down a path of 'dumbing down' STIX to the point that it becomes useless for actually understanding what the bad guys are doing.

Indicators aren't magically created. They are created by smart people investigating what the bad guys are doing, reverse engineering their code, understanding their processes, and learning how they operate and what they do. Indicators are made from detailed understanding and knowledge of specific threat actors. Stripping these from STIX effectively means that we just make it harder to create Indicators.

Cheers,
Terry MacDonald
Senior STIX Subject Matter Expert
SOLTRA | An FS-ISAC and DTCC Company
+61 (407) 203 206 | firstname.lastname@example.org

Agreed. It should be a part of the simplification. We've briefly looked at what a JSON STIX in the current form would look like; it's not always pretty. Our implied best practices and optionality don't always translate well.

Sent from my BlackBerry 10 smartphone on the Verizon Wireless 4G LTE network.
If we just convert STIX XML to JSON, I think what we will see is very complex-looking JSON. While I prefer JSON now, I don't think the switch from XML to JSON is going to make anyone's life that much easier.

Aharon

Changed the thread title since the topic changed.
We had several discussions about JSON in the past without arriving at a complete STIX implementation. XML to JSON, as a format conversion, can be done. I think we should show the JSON validation mechanism(s) that will be used by the CTI SC to assure producers/consumers that we can provide means of testing schema/spec conformity.
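As a sketch of the kind of conformance check being asked for (a hand-rolled validator over an invented mini-schema; a real effort would publish official JSON Schema files and use a standard validator, and none of these field names are actual STIX):

```python
import json

# Invented mini-schema for illustration: required fields plus expected types.
schema = {
    "required": ["type", "id"],
    "types": {"type": str, "id": str, "confidence": int},
}

def conforms(document: dict, schema: dict) -> list:
    """Return a list of conformance errors; an empty list means valid."""
    errors = []
    for field in schema["required"]:
        if field not in document:
            errors.append(f"missing required field: {field}")
    for field, expected in schema["types"].items():
        if field in document and not isinstance(document[field], expected):
            errors.append(f"wrong type for {field}")
    return errors

good = json.loads('{"type": "indicator", "id": "indicator-1234"}')
bad = json.loads('{"type": "indicator", "confidence": "high"}')

print(conforms(good, schema))  # []
print(conforms(bad, schema))
```

A producer could run a check like this before publishing, and a consumer on receipt, so both sides agree on what "valid" means without reading the prose spec.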
I am sold on JSON. Is there an argument against JSON? If so, let's hear it so that we can hash through it.

Aharon

The reason I push for JSON is that all of the developers and CTOs I have talked to in various organizations, companies, vendors, and open-source groups always ask for "anything but XML". I then ask what they would prefer, and they all say, without exception, "JSON".

So, a novel idea... Let's give them what they want, JSON and a simple STIX 2.0 model, and let's drive for massive adoption. Our number 1 goal should be adoption, followed by a model that can meet at least 70-80% of the market use cases.

Let's get STIX 2.0 support into every networking product, every security tool, and every security broker. Then, as we gain massive adoption, let's iterate and figure out what we need to do to solve the problems we run into. Let's first get adoption, and I do not mean a few niche groups here and there and one large ecosystem. I am talking about every networking and security product on the planet.

I want to remove as many of the hurdles development shops have against STIX as possible. I want to make it so easy for them to adopt it that there is no question of them adopting it. I do not want to see more groups go off and do their own thing or move over to FB's ThreatExchange or OpenTPX.

It would be a GREAT problem to have SO MUCH adoption, and SO MANY STIX documents flowing across the network each day, that we had to do something to address the load.

Thanks,

Bret

Bret Jordan CISSP
Director of Security Architecture and Standards | Office of the CTO
Blue Coat Systems
PGP Fingerprint: 63B4 FC53 680A 6B7D 1447 F2C0 74F8 ACAE 7415 0050
"Without cryptography vihv vivc ce xhrnrw, however, the only thing that can not be unscrambled is an egg."

On Nov 13, 2015, at 11:26, Jerome Athias <athiasjerome@GMAIL.COM> wrote:

I do appreciate the "let's do it", if it is not just a "just do it". For the JSON approach, I would just like to see (backed by facts) what percentage of the use cases/requirements it can cover, and when.
2015-11-13 21:17 GMT+03:00 Jordan, Bret <email@example.com>:
John this is really well said.
I feel like we listened to every possible user requirement out there for
STIX 1.0, and we tried to create a data model that could solve every possible
use case and corner case, regardless of how small. The one thing we sorely
forgot to do is figure out what developers can actually implement in code, or
what product managers are willing to implement in code.
Let's make STIX 2.0 something that meets 70-80% of the use cases and can
actually be implemented in code by the majority of software development
shops. Yes, I am talking about a STIX Lite. People can still use STIX 1.x
if they want everything. Over time we can add more and more features to the
STIX 2.0 branch as software products that use CTI advance and users can do
more and more with it.
Let's start with JSON + JSON Schema and go from there. I would love to have
to migrate to a binary solution or something that supports RDF in the future
because we have SO MUCH demand and there is SO MUCH sharing that we really
need to do something.
1) Let's not put the cart before the horse.
2) Let's fail fast, and not ride the horse to the glue factory.
3) Let's start small and build massive adoption.
4) Let's make things so easy for development shops to implement that there is
no reason for them not to.
Bret Jordan CISSP
Director of Security Architecture and Standards | Office of the CTO
Blue Coat Systems
PGP Fingerprint: 63B4 FC53 680A 6B7D 1447 F2C0 74F8 ACAE 7415 0050
"Without cryptography vihv vivc ce xhrnrw, however, the only thing that can
not be unscrambled is an egg."
On Nov 13, 2015, at 08:09, Wunder, John A. <firstname.lastname@example.org> wrote:
So I’ve been waiting for a good time to outline this, and I guess here is as
good a place as any. I’m sure people will disagree, but I’m going to say it anyway.
Personally I think of these things as four levels:
- User requirements
- Running software
- Instantiation of the data model (XML, JSON, database schemas, an object
model in code, etc.)
- Data model
User requirements get supported in running software. Running software uses
instantiations of the data model to work with data in support of those user
requirements. The data model and specification define the instantiations of
the data and describe how to work with them in a standard way.
The important bit here is that there’s always running software between the
user and the data model. That software is (likely) a tool that a vendor or
open source project supports that contains custom code to work specifically
with threat intel. It might be a more generic tool like Palantir or whatever
people do RDF stuff with these days. But there’s always something.
This has a couple of implications:
- Not all user requirements get met in the data model. It’s perfectly valid
to decide not to support something in the data model if we think it’s fine
that implementations do it in many different ways. For example,
de-duplication: do we need a standard approach or should we let tools decide
how to do de-duplication themselves? It’s a user requirement, but that
doesn’t mean we need to address it in the specs.
- Some user requirements need to be translated before they get to the data
model. For example, versioning: users have lots of needs for versioning.
Systems also have requirements for versioning. What we put in the specs
needs to consider both of these.
- This is the important part: some user requirements are beyond what
software can do today. I would love it if my iPhone got 8 days of
battery life. I could write that into some specification. That doesn’t mean
it’s going to happen. In CTI, we (rightfully) have our eyes towards this end
state where you can do all sorts of awesome things with your threat intel,
but just putting it in the data model doesn’t automatically make that
happen. We’re still exploring this domain and software can only do so much.
So if the people writing software are telling us that the user requirements
are too advanced (for now), maybe that means we should hold off on putting
it in the data model until it’s something that we can actually implement? In
my mind this is where a lot of the complexity in STIX comes from: we
identified user requirements to do all these awesome things and so we put
them in the data model, but we never considered how or whether software
could really implement them. The perfect example here is data markings:
users wanted to mark things at the field level, most software isn’t ready
for that yet, and so we end up with data markings that are effectively
broken in STIX 1.2. This is why many standards bodies have requirements for
running code: otherwise the temptation is too great to define specification
requirements that are not implementable and you end up with a great spec
that nobody will use.
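The data-markings point above can be sketched in JSON (the field names and resolution rule here are invented for illustration; they are not STIX 1.2's actual marking structures, which use XPath-style selectors):

```python
# Object-level marking: one label for the whole object. Easy for software:
# the entire record is handled under a single policy.
object_marked = {
    "id": "indicator-1",
    "marking": "TLP:AMBER",
    "title": "C2 beacon",
    "description": "Seen at partner site",
}

# Field-level marking: each field can carry its own label. Now every
# consumer must track policy per field through storage, display, and export.
field_marked = {
    "id": "indicator-1",
    "title": "C2 beacon",
    "description": "Seen at partner site",
    "markings": [
        {"selector": "title", "marking": "TLP:GREEN"},
        {"selector": "description", "marking": "TLP:RED"},
    ],
}

def marking_for(doc: dict, field: str) -> str:
    """Resolve which marking applies to a single field of a document."""
    for m in doc.get("markings", []):
        if m["selector"] == field:
            return m["marking"]
    # Fall back to the object-level marking, or a permissive default.
    return doc.get("marking", "TLP:WHITE")

print(marking_for(field_marked, "description"))  # TLP:RED
```

Even in this toy form, every tool in the chain has to implement `marking_for` consistently for field-level markings to mean anything, which is exactly the "software isn't ready for that yet" problem.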
Sorry for the long rant. I've been waiting to get that off my chest for a while
(as you can probably tell).
On Nov 13, 2015, at 9:17 AM, Jerome Athias <athiasjerome@GMAIL.COM> wrote:
Sorry to the others if this is off-topic.

Remember that software is good only if it satisfies the users (meets,
or exceeds, their requirements). You can write 'perfect/optimized' code, but if the users are not
satisfied, it's bad software.

"If you can't explain it simply, you don't understand it well
enough." - Albert Einstein

Challenges are exciting, but sometimes difficult. It's about
motivation and satisfaction.

There is no programming language better than another (just like OSes);
only you can select the best one for your needs.

I did a conceptual map for the 'biggest Ruby project of the internet'
(Metasploit Framework); it's just a picture, but it represents 100 pages.
I think we could optimize (as with a maturity model) our approach of
2015-11-13 17:02 GMT+03:00 John Anderson <email@example.com>:
The list returns my mail, so probably you'll be the only one to get my reply.
Funny, I missed that quote from the document. And it's spot on. As an
architect myself, I have built several "elegant" architectures, only to
find that the guys who actually had to use them just. never. quite. got it.
My best architectures have emerged when I've written test code first.
("Test-first" really does work.) I've learned that writing code--while
applying KISS, DRY and YAGNI--saves me from entering the architecture
stratosphere. That's why I ask the architects to express their creations in
code, and not only in UML.
I'm pretty vocal about Python, because it's by far the simplest popular
language out there today. But this principle applies in any language: if the
implementation is hard to explain, it's a bad idea. (Another quote from the
Zen of Python.) Our standard has a lot that's hard to explain, especially to
newcomers. How can we simplify, so that it's almost a no-brainer to adopt?
Again, thanks for the article, and the conversation. I really do appreciate it.
From: Jerome Athias <firstname.lastname@example.org>
Sent: Friday, November 13, 2015 8:45 AM
To: John Anderson
Subject: Re: [cti] The Adaptive Object-Model Architectural Style
Thanks for the feedback.
Kindly note that I'm not strongly defending this approach for the CTI
TC (at least for now).
Since you're using quotes:
"Architects that develop these types of systems are usually very proud
of them and claim that they are some of the best systems they have
ever developed. However, developers that have to use, extend or
maintain them, usually complain that they are hard to understand and
are not convinced that they are as great as the architect claims."
This, I hope, could help our developers understand that what sometimes
feels difficult is not difficult by design, but because we are dealing
with a complex domain where abstraction, conceptual approaches, and
ontologies have benefits.

Hopefully we can obtain consensus on a well-balanced, adapted approach.
2015-11-13 16:24 GMT+03:00 John Anderson <email@example.com>:
Thanks for the link. I really enjoy those kinds of research papers.
On page 20, the section "Maintaining the Model" states pretty clearly
that this type of architecture is very unwieldy from an end-user
perspective; consequently, it requires a ton of tooling development.
The advantage of such a model is that it's extensible and easily changed.
But I'm not convinced that extensibility is really our friend. In my
(greatly limited) experience, the extensibility of STIX and CybOX has made
them that much harder to use and understand. I'm left wishing for "one
obvious way to do things."
If I were given the choice between (1) a very simple data model that's not
extensible, but clear and easy to approach and (2) a generic, extensible
data model whose extra layers of indirection make it hard to find the actual
data, I'd gladly choose the first.
Keeping it simple,
 The full wording from "Maintaining the Model":
The observation model is able to store all the metadata using a
mapping to relational databases, but it was not straightforward
for a developer or analyst to put this data into the database. They would
have to learn how the objects were saved in the database as well as the
proper semantics for describing the business rules. A common solution to
this is to develop editors and programming tools to assist users with using
these black-box components. This is part of the evolutionary process of
Adaptive Object-Models as they are, in a sense, "Black-Box" frameworks,
and as they mature, they need editors and other support tools to aid in
describing and maintaining the business rules.
 From "The Zen of Python": https://www.python.org/dev/peps/pep-0020/
From: firstname.lastname@example.org <email@example.com> on behalf of
Jerome Athias <firstname.lastname@example.org>
Sent: Friday, November 13, 2015 5:20 AM
Subject: [cti] The Adaptive Object-Model Architectural Style
Realizing that the community members have different backgrounds,
experience, expectations, and uses of CTI in general, from a high-level
(abstracted/conceptual/ontology-oriented) point of view, through a
day-to-day use (experienced) point of view, to a technical
(implementation/code) point of view...

I found this diagram (and document) interesting: it is easy to read and
potentially adapted to our current effort. So I just wanted to share.