This changes comments to be plain strings passed between the
codegen package and its clients, which will let codegen limit how long a
comment line is when generating the code in the future.
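For illustration only, a minimal sketch of how such a limit might be applied,
assuming a helper like wrapComment (not part of the actual codegen API) that
breaks a client-supplied comment string into fixed-width "//" lines:

```go
package main

import (
	"fmt"
	"strings"
)

// wrapComment is a hypothetical helper showing how a plain comment string
// could be broken into fixed-width comment lines before being emitted.
func wrapComment(comment string, width int) []string {
	var lines []string
	current := ""
	for _, word := range strings.Fields(comment) {
		switch {
		case current == "":
			current = word
		case len(current)+1+len(word) <= width:
			current += " " + word
		default:
			lines = append(lines, "// "+current)
			current = word
		}
	}
	if current != "" {
		lines = append(lines, "// "+current)
	}
	return lines
}

func main() {
	for _, l := range wrapComment("A long comment passed in by a codegen client as one string.", 30) {
		fmt.Println(l)
	}
}
```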
- Organize manager function generation into one helper method
- Vocabulary name is passed into the type & property generators
- Use interface only in the manager
- Remove unused flags in the main program
The manager class will be responsible for keeping the generated code
compilable while still permitting types and properties to be isolated,
so that binaries can be pruned to smaller sizes and do not need the
entire gamut built into the resulting executable.
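As a hedged sketch of that idea, the generated code could depend only on a
small interface while a generated manager supplies the concrete deserializers;
the privateManager and DeserializeNote names below are assumptions for
illustration, not the real generated API:

```go
package main

import "fmt"

// privateManager is an illustrative stand-in for the interface the generated
// code would depend on, so unused types can be left out of a build.
type privateManager interface {
	// DeserializeNote resolves the deserializer for one generated type.
	DeserializeNote() func(map[string]interface{}) (interface{}, error)
}

// Manager would be generated alongside whichever types are compiled in,
// satisfying the interface above.
type Manager struct{}

func (Manager) DeserializeNote() func(map[string]interface{}) (interface{}, error) {
	return func(m map[string]interface{}) (interface{}, error) {
		return fmt.Sprintf("Note with %d fields", len(m)), nil
	}
}

func main() {
	var mgr privateManager = Manager{}
	deserialize := mgr.DeserializeNote()
	v, _ := deserialize(map[string]interface{}{"type": "Note"})
	fmt.Println(v)
}
```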
This state will successfully generate code, but the generated code is
completely uncompilable. It will also trash the props/ directory.
Still plenty of missing features and missing implementations in the
generated code. Also missing some functionality and flags for generating
references and/or well-known references (e.g. XML, RDF values).
Still need to flesh out the types for conversion. Also still need to add
the serialize and deserialize calls for individual types. Finally, will
need to put the finishing touches on writing the output files in the
desired directories. Then the experimental tool will be ready for
end-to-end testing.
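One possible shape for those per-type serialize and deserialize calls is
sketched below; the Note type and its fields are illustrative assumptions,
not the generated code:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Note is a stand-in for one generated type.
type Note struct {
	Name    string
	Content string
}

// Serialize converts the type into the generic map form used for JSON.
func (n Note) Serialize() (map[string]interface{}, error) {
	return map[string]interface{}{
		"type":    "Note",
		"name":    n.Name,
		"content": n.Content,
	}, nil
}

// DeserializeNote rebuilds the type from the generic map form.
func DeserializeNote(m map[string]interface{}) (Note, error) {
	var n Note
	if s, ok := m["name"].(string); ok {
		n.Name = s
	}
	if s, ok := m["content"].(string); ok {
		n.Content = s
	}
	return n, nil
}

func main() {
	m, _ := Note{Name: "hi", Content: "hello world"}.Serialize()
	b, _ := json.Marshal(m)
	fmt.Println(string(b))
}
```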
Also begin populating values in the intermediate definition.
TODO: Replace the hack in the spec definition with something applicable
RDF-wise (is there anything in RDF that permits this?).
- Fixed the way indexing nodes were being applied
- Implemented property types in ontologies
- Improved class types in ontologies
- Lots of other stuff
Also adds some placeholders for the schema ontology.
Eliminated some dead code in the RDF manager.
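For a rough idea of the ontology concept, here is a hypothetical sketch in
which a vocabulary maps IRIs onto property and class nodes; none of these
names come from the actual RDF package:

```go
package main

import "fmt"

// RDFNode, property, class, and ontology are illustrative stand-ins for how
// an ontology could register the properties and classes it defines.
type RDFNode interface {
	Kind() string
}

type property struct{ name string }

func (p property) Kind() string { return "property " + p.name }

type class struct{ name string }

func (c class) Kind() string { return "class " + c.name }

type ontology struct {
	spec  string
	nodes map[string]RDFNode
}

func main() {
	o := ontology{
		spec: "http://schema.org/",
		nodes: map[string]RDFNode{
			"name":         property{"name"},
			"CreativeWork": class{"CreativeWork"},
		},
	}
	for suffix, n := range o.nodes {
		fmt.Println(o.spec+suffix, "=>", n.Kind())
	}
}
```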
Next step is to get mainEntity parsing to ignore the rest of the values,
which should be easier to do, maybe.
Next, the actual nodes need to be created in order to construct the
proper intermediate form and translate the parsed data into a meaningful
structure that can be used to generate code.
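A hedged sketch of that node idea, assuming each node inspects one parsed
JSON-LD entry and contributes to the intermediate vocabulary; the Node,
typeNode, and Vocabulary names are illustrative only:

```go
package main

import "fmt"

// Vocabulary is a stand-in for the intermediate form under construction.
type Vocabulary struct {
	Types map[string]string // type name -> notes about the type
}

// Node is one piece of parsing logic applied to a parsed JSON-LD value.
type Node interface {
	Apply(key string, value interface{}, v *Vocabulary) bool
}

// typeNode handles entries that declare a type.
type typeNode struct{}

func (typeNode) Apply(key string, value interface{}, v *Vocabulary) bool {
	if key != "type" {
		return false
	}
	name, ok := value.(string)
	if !ok {
		return false
	}
	v.Types[name] = "declared in input"
	return true
}

func main() {
	v := &Vocabulary{Types: map[string]string{}}
	nodes := []Node{typeNode{}}
	parsed := map[string]interface{}{"type": "Note", "name": "ignored for now"}
	for key, value := range parsed {
		for _, n := range nodes {
			if n.Apply(key, value, v) {
				break
			}
		}
	}
	fmt.Println(v.Types)
}
```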
Ideally, this could also allow code generation in other languages, as
well as different ways to read in ActivityStreams specifications and
extensions. But that would be far off in the future.
This is an evolution of the old tools/defs/types.go file, which
essentially had these data structs manually defined in static Go code.
Now, the parser should be able to construct this data structure on the
fly from input files and reference files.
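A loose sketch of the kind of data structs meant here, with TypeDef and
PropertyDef as illustrative stand-ins for whatever the parser actually builds:

```go
package main

import "fmt"

// PropertyDef and TypeDef are hypothetical shapes for the vocabulary
// definitions; the tool's actual field names may differ.
type PropertyDef struct {
	Name   string
	Domain []string // type names the property may appear on
	Range  []string // kinds of values the property may hold
}

type TypeDef struct {
	Name       string
	Extends    []string
	Properties []string
}

func main() {
	// Previously these were written by hand in a static file; the parser is
	// now meant to build them from input files and reference files.
	note := TypeDef{Name: "Note", Extends: []string{"Object"}, Properties: []string{"content"}}
	content := PropertyDef{Name: "content", Domain: []string{"Object"}, Range: []string{"string", "langString"}}
	fmt.Println(note, content)
}
```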
This package will be the frontend for reading the JSON-LD context
descriptions that specify ActivityStreams vocabularies. This will allow
ingesting publicly hosted or manually-created vocabularies and
generating an internal representation for later code generation.
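As a minimal sketch of that frontend idea, assuming the input is a JSON
document with an "@context" member (the exact document shape here is an
assumption for illustration):

```go
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// A hand-written stand-in for a fetched JSON-LD vocabulary document;
	// the real inputs are the publicly hosted context descriptions.
	raw := []byte(`{
		"@context": {"as": "https://www.w3.org/ns/activitystreams#"},
		"name": "Example vocabulary"
	}`)
	var doc map[string]interface{}
	if err := json.Unmarshal(raw, &doc); err != nil {
		panic(err)
	}
	// The real frontend would walk the whole document to build the internal
	// representation; here only the @context aliases are inspected.
	if ctx, ok := doc["@context"].(map[string]interface{}); ok {
		for alias, iri := range ctx {
			fmt.Printf("context alias %q -> %v\n", alias, iri)
		}
	}
}
```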