Evaluation in Action: Stories from across ESIP

Abstract/Agenda: 

ABSTRACT

Building on the knowledge newly acquired by Evaluation Workshop participants and others, this session will feature speakers sharing their experiences in project, program, and organizational evaluation.


AGENDA

8:30 - 8:45 Using Logic Models as a Framework for Planning, Implementing and Evaluating Research

William Michener, College of University Libraries & Learning Sciences, University of New Mexico

Grand challenge research is becoming increasingly complex and often involves teams of scientists from many disciplines as well as the engagement of educators, communication and cyberinfrastructure experts, and professional evaluators.  This talk uses several examples from successfully funded environmental and cyberinfrastructure research projects to illustrate how logic models can be used as a framework for designing research projects as well as supporting strategic planning, evaluation and assessment, and reporting.  Exemplar projects include DataONE (a cyberinfrastructure project), a sustainable energy development project, and a project to create a citizen science organization. 


8:45 - 9:00 Survey techniques for managing change in earth science information systems

Jonathan N. Blythe, Scientific Data Manager, Environmental Studies Program, Bureau of Ocean Energy Management, U.S. Department of the Interior

Federal agencies are responsible for a variety of earth science information systems that need to be maintained and updated to keep pace with changes in technology and society. These changes can be mission-critical when they affect the customer experience or involve changes to business practices that affect operations. Surveys may help organizations evaluate existing practices and manage organizational change. Discussed here are two examples: an interagency effort to assess community information needs, and the initial implementation of an agency-wide Open Data Policy. Community-based approaches may be designed around respondents’ shared background. This is not possible when stakeholders come from several subject-matter areas, which introduces complexities in how survey questions are worded and how responses are interpreted for analysis. However, survey respondents from across an agency share an organizational context that may be used instead to contextualize survey responses. Discussion will focus on the benefits and shortcomings of Internet-based survey tools, like Google Forms and SurveyMonkey.


9:00 – 9:15 Ongoing Evaluation to Inform the Development of an Online Resource Center

Robert R. Downs and Robert S. Chen, Center for International Earth Science Information Network (CIESIN)

Evaluation activities can identify opportunities for improvement when developing systems, programs, and services to address the needs of diverse stakeholders. The results of evaluation efforts can also initiate organizational learning and enhance the understanding needed to attain current and future objectives and goals. We describe how multiple evaluation efforts that employ different methods have been used to inform the development of a resource center to support the preservation of geospatial data and related resources.


9:15 – 9:30 Evaluation Primer for Scientists, Based on a True Story

Ana I. Prados, University of Maryland Baltimore County and NASA GSFC

This talk will outline how one scientist developed and implemented a program evaluation for NASA's Applied Remote Sensing Training Program (ARSET), which builds capacity for decision makers to access and use remote sensing observations. I will discuss how we educated ourselves, realized it was NOT rocket science, found an evaluator, and developed a full evaluation plan. I will also describe our journey in crafting a logic model and evaluation instruments (surveys and interviews), which have now become an integral component of ARSET. Lessons learned and tips for getting your own evaluation going will also be discussed.


9:30 – 10:00 Discussion

Notes: 

Ana Prados: Session will highlight examples of how logic models were used successfully to support decision making.

----- First Presentation -----
Ana introduces Bill Michener from the College of University Libraries & Learning Sciences at the University of New Mexico.

Increase the chance of getting funding when writing proposals: SMART ideas.
Logic model approach: not just what you want to do, but why it is important and who it benefits.

Example: impact of climate change on mountain hydrology at Univ of New Mexico
Logic model helped unify several teams tackling different parts of the problem

Other example: developing a proposal to NSF for citizen science
Other example: DataONE. Develop a logic model for each project component to see what stands up to scrutiny.

Review panels want to see your logic model converted to text
Idea: use tables with time blocks to create visual aid for reviewers

Tips for proposal writing:
-Read the RFP at least 3-4 times
-Use a matrix: create and compare against a requirements matrix (or compliance matrix)
--One column with the requirement from the RFP (also include the review criteria)
--One column with comments: how the requirement was addressed in the project description, what else needs to be done, etc.
--One column with the proposal reference (page #, section, or attachment #)
-Assessment and evaluation table:
--One column with strategies by component
--One column with corresponding output metrics
--One column with a block timetable
--One column with outcomes and metrics
-Track and manage time graphically/visually.

----- Second Presentation ------

Ana introduces Jonathan Blythe, Scientific Data Manager for the Environmental Studies Program at the Bureau of Ocean Energy Management, to present "The data inventory survey"

Focus on tools/methods used to collect data inventories
Use case: data inventories as a data policy tool

Programs being assessed:
1) Biological Integration and Observation Task Team (BIO-TT)
2) Department of the Interior Open Data Policy tasks

Evaluation methodology:
-Considerations: selecting survey respondents, crafting the survey design, and planning how responses will be used
-Example 1: BIO-TT project: used SurveyMonkey to collect responses and guide efforts
-Example 2: DOI ODP: used a Google Form to collect responses and identify need areas; responses are placed into a spreadsheet and results are published online (see the sketch below)

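A minimal sketch of the kind of response tallying described above, assuming a hypothetical responses.csv exported from the form's linked spreadsheet; the file name and column handling are assumptions, not the actual BIO-TT or DOI ODP instruments:

```python
# Minimal sketch: tally answers per question from a survey export.
# "responses.csv" is a hypothetical download from the form's linked
# spreadsheet; the "Timestamp" column name is illustrative only.
import csv
from collections import Counter

counts = {}  # question -> Counter of answers

with open("responses.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        for question, answer in row.items():
            if question == "Timestamp" or not answer:
                continue
            counts.setdefault(question, Counter())[answer] += 1

for question, tally in counts.items():
    print(question)
    for answer, n in tally.most_common():
        print(f"  {answer}: {n}")
```
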
Results of evaluation:
-Representative sampling of data generated/needed
-Qualitative analysis

----- Third Presentation -----
Ana introduces Robert Downs from the NASA Socioeconomic Data and Applications Center (SEDAC) to present "Ongoing Evaluation to Inform the Development of an Online Resource Center"

Why is Evaluation important?
-Assess efforts
-ID corrections needed
-Measure progress
-Learn and improve understanding of efforts
-Demonstrate success and failure to stakeholders

Multiple Evaluation Methods:
-Provide several perspectives on evaluations
-Evaluation during various stages of a project
-Conducted internally, with stakeholders, or independently

Examples of SEDAC evaluations:
-Internal audit (Trusted Repository Audit and Certification document)
-External audit (ISO 16363, by the Primary Trustworthy Digital Repository Authorization Body)

Evaluation of Geospatial Data Preservation Resource Center
-Project to develop and manage an online resource center for geospatial data stewardship

Pre-development Evaluation
-Survey user community expectations through an online survey

Improving Planning Process:
-Fluid Design and Implementation Plan: modified with each survey response
-Multiple drafts: continuous review for potential improvement; Revised "early and often"

Improving website development:
-Content categories, resource types, etc.
-Prioritize new capabilities, features and enhancements
-Independent company to evaluate website

Ongoing improvement:
-Requesting recommendations for resources through an online submission form
-Monitor web metrics (see the sketch below)
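
A minimal sketch of basic web-metric monitoring, assuming a hypothetical access.log in Common Log Format; the file name and format are assumptions about a generic web server, not SEDAC's actual setup:

```python
# Minimal sketch: page views and unique client addresses from a
# hypothetical "access.log" in Common Log Format.
from collections import Counter

views = Counter()   # requested path -> hit count
clients = set()     # distinct client addresses

with open("access.log", encoding="utf-8") as f:
    for line in f:
        parts = line.split()
        if len(parts) < 7:
            continue                # skip malformed lines
        clients.add(parts[0])       # client address
        views[parts[6]] += 1        # requested path

print(f"Unique visitors (by address): {len(clients)}")
for path, hits in views.most_common(10):
    print(f"{hits:6d}  {path}")
```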

Questions:
-ISO 16363: audit and certification of digital repositories. How do we ensure systems are trustworthy? The audit looks at management of the site/system, the organization that runs the system, management of digital objects within the system, and security of the system.
-How to survey? Partner with universities to conduct surveys.

----- Fourth Presentation -----
Ana Prados from the Joint Center for Earth Systems Technology at the University of Maryland Baltimore County: "Evaluation Primer for Scientists: Based on a True Story"

ARSET: Applied Remote Sensing Training Program

Gradual learning approach:
-Basics: Webinars and hands-on
-Advanced: Hands-on, tailored course

ARSET serves as a matchmaker between NASA PIs and end users

Motivation for ARSET Program Evaluation:
-Improve program, better results
-Communicate impacts/outcomes to funders
-Improve value to stakeholders
-Show success stories

ARSET startup: educated ourselves on project evaluation and logic models; hired a professional evaluator; built a logic model that describes how ARSET works.

The logic model is a living document that allows you to evaluate at any point in the model (see the sketch below).
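
A minimal sketch of the standard logic-model categories as a simple data structure; the entries are illustrative placeholders for a generic remote sensing training program, not ARSET's actual logic model:

```python
# Illustrative only: standard logic-model categories with placeholder
# entries for a hypothetical remote sensing training program.
logic_model = {
    "inputs":     ["staff time", "NASA data products", "webinar platform"],
    "activities": ["basic webinars", "advanced hands-on workshops"],
    "outputs":    ["participants trained", "course materials published"],
    "outcomes":   ["participants apply satellite data in their work"],
    "impacts":    ["improved environmental decision making"],
}

# Evaluation can target any point in the model, e.g. an output metric:
print("Output metrics to track:", ", ".join(logic_model["outputs"]))
```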

Session ended by Ana at 10:30am

Citation:
Prados, A.; Evaluation in Action: Stories from across ESIP; Winter Meeting 2014. ESIP Commons, November 2013