Harness: A New Development and Testing Environment for Big Data, Algorithms (and anything else)
Harness is a generic environment designed for easy loading, parsing, and evaluation of complex software projects and project groups. Large-scale systems consisting of multiple languages and intricate computational routines can be loaded into Harness as components and parsed into internal and external functions, which can in turn be combined into workflows and evaluated on both execution metrics and workflow state. Evaluations with matching output types can be compared against each other for differences as projects evolve over time. Harness uses a generic style that exposes system components as web-enabled microservices, making it a good candidate for modernizing and standardizing all sorts of code bases.
Harness was developed in concert with a reengineering effort of the pairwise homogeneity algorithm at NOAA's National Centers for Environmental Information in Asheville, North Carolina, and is a perfect match for the complex climate data domain.
This will be a general overview of the open source Harness software system, examining its tenets and potential utility in Big Climate Data. We plan to have the engineers who worked on the pairwise homogeneity algorithm reengineering available to discuss a real-world application of the system.
We also hope to hear feedback on which features would be most useful, to see where the project should focus its development resources.
For more information and a more detailed look, the white paper on the project is available.
We are always looking for collaborators to help write language drivers, meta-workflows for machine learning, and other enhancements.
Contact the principal for more information:
[email protected]
Please see the attached PowerPoint, white paper, and poster for more information.
(Following notes added by attendee)
Harness: A Development and State Testing Framework
Domain characteristics
- Multiple languages and project dependencies
- Project duplication
- Diverse array of tools
- Large volume of documents
- Full stack production (design, collection, analysis, production, maintenance)
- The scope, breadth and depth of data issues
Domain trends:
- Technology -> more options
- Stakeholders -> more demanding
- Algorithms -> more complex
- Funding -> relatively stagnant
- People -> increasing specialization
- Obligations -> increasing
Our focus: data/expertise/technology diversity
A. Pros: better breadth and depth; deeper understanding; more options to pick the best tools
B. Cons: more noise and processing; deeper specializations; more hands in the pot; mismatches in computing
Unmanaged complexity:
Product management: Version control: code-manual, Git…; Development: Trac, FogBugz, JIRA; Deployment:
Product Development Ideal:
Some changes in Team A and B
Why the ideal isn't realized:
- No team develops in a bubble;
- The ‘best’ tools are arbitrary;
- Nothing escapes change
- Documentation is hard;
- Datasets are large and complex
Potential for improvement:
- Similar things should be interchangeable and technology agnostic;
- Product changes should be documented
Improvement without adding complexity
Harness concept: flexible and lightweight
Purpose: controllable and predictable; emphasizing functionality through constraint
By analogy with astronomy: the solar system is the harness, and the sun is the algorithm package. Component functions can be internal or external.
Functional workflows:
Language translation: communication among Python, Java, and Fortran
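One common way to let Python, Java, and Fortran components talk to each other, in line with the abstract's microservice framing, is a language-neutral wire format such as JSON. This is a minimal hypothetical sketch of the Python side; the protocol and function names are illustrative, not Harness's actual interface:

```python
import json


def dispatch(request_json):
    """Decode a language-neutral JSON request, run the named function,
    and return a JSON response. A Java or Fortran caller would send the
    same bytes over HTTP or a pipe. Hypothetical protocol, not Harness's."""
    request = json.loads(request_json)
    # Registry of callable component functions (toy example).
    functions = {"mean": lambda xs: sum(xs) / len(xs)}
    func = functions[request["function"]]
    result = func(*request["args"])
    return json.dumps({"result": result})


# A caller in any language sends the same JSON text.
reply = dispatch('{"function": "mean", "args": [[2.0, 4.0]]}')
```

Because both sides agree only on the JSON shape, neither needs to know what language the other is written in.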
Accessible states: a retrievable record of all states through time
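The "retrievable record of all states through time" idea can be sketched as a wrapper that logs every call's inputs and outputs. This is a hypothetical illustration, not the actual Harness API:

```python
import time


class StateLog:
    """Record each function call's inputs and outputs so any past
    state can be retrieved later (hypothetical sketch, not Harness's API)."""

    def __init__(self):
        self.records = []

    def run(self, func, *args, **kwargs):
        result = func(*args, **kwargs)
        self.records.append({
            "function": func.__name__,
            "inputs": {"args": args, "kwargs": kwargs},
            "output": result,
            "timestamp": time.time(),
        })
        return result

    def history(self, function_name):
        """Retrieve all recorded states for one function, in order."""
        return [r for r in self.records if r["function"] == function_name]


def adjust(series, offset):
    """Toy stand-in for an algorithm step."""
    return [x + offset for x in series]


log = StateLog()
log.run(adjust, [1.0, 2.0], offset=0.5)
log.run(adjust, [3.0], offset=-0.5)
```

After the two calls, `log.history("adjust")` returns both recorded states in chronological order, which is what makes before/after comparisons of evolving workflows possible.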
Concept to Design
- Design tenets, abstract and generic framework-> try not to make assumptions about the project
- Seamless and comprehensive utility -> automatically add DRY functionality without obfuscation
- Accommodation for development -> promote flexible development strategies
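The tenet of automatically adding DRY functionality without obfuscation resembles decorator-style wrapping: shared bookkeeping is attached around a function without editing its body. A minimal Python sketch, with names that are illustrative rather than Harness's actual mechanism:

```python
import functools
import time


def harness_instrument(func):
    """Wrap a component function with shared (DRY) bookkeeping --
    call counting and timing -- without touching the function body.
    Illustrative only; Harness's real mechanism may differ."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = func(*args, **kwargs)
        wrapper.calls += 1
        wrapper.total_seconds += time.perf_counter() - start
        return result
    wrapper.calls = 0
    wrapper.total_seconds = 0.0
    return wrapper


@harness_instrument
def pairwise_diff(a, b):
    """Toy stand-in for an algorithm step."""
    return [x - y for x, y in zip(a, b)]


pairwise_diff([3, 5], [1, 2])
pairwise_diff([10], [4])
```

The algorithm code stays readable on its own; the instrumentation lives entirely in the wrapper, which is the "without obfuscation" part of the tenet.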
Harness architecture:
- Status:
- Component: local copy on disk
- Adding a function:
- Functions in Harness: functions contain many sub-functions
- A workflow in Harness: function->inputs->(metadata, order, required, source, target)
- Adding an evaluation: choose workflow -> define input sources, and optionally define output targets
- Structure architecture overview:
- structure templates provide translators between one another.
- Add a structure (group metadata); add a template (metadata structure); add a field (index, template, metadata, type, order, required);
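The notes above suggest workflows, evaluations, and structures are declarative records built from the listed fields (metadata, order, required, source, target; index, template, type). A hypothetical rendering as Python dictionaries; every name and value below is invented for illustration, not Harness's actual schema:

```python
# Hypothetical shapes inferred from the session notes; not Harness's schema.
workflow = {
    "function": "pha_adjust",           # which component function to run
    "inputs": [
        {"metadata": {"units": "degC"},
         "order": 1,                    # position in the workflow
         "required": True,
         "source": "station_series",   # where the input comes from
         "target": "adjusted_series"}, # where the output should go
    ],
}

evaluation = {
    "workflow": workflow,
    "input_sources": {"station_series": "stations_2020.dat"},
    # Output targets are optional when adding an evaluation.
    "output_targets": {"adjusted_series": "adjusted_2020.dat"},
}

structure = {
    "template": "station_record",       # a template groups field definitions
    "fields": [
        {"index": 0, "template": "station_record", "metadata": {},
         "type": "float", "order": 1, "required": True},
    ],
}
```

Declarative records like these are what would let evaluations with matching output types be compared mechanically as a project evolves.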
Case study: PHA practical application
Questions: a request for more details about the evaluation step; the answer: it depends heavily on the use case.
The UI is not a concern because Angular and D3 are used.