Wednesday, May 2, 2012


Discussions and information gained since the last conference call.
  • Conference call with Delbert Elliott – interested in possible contribution (call on April 27th)
  • Meeting with Lance Till (April 27th)
  • Conference call with Abigail Fagan – interested in possible contribution (call on April 30th)
  • Alan Brown and Gail Chadwick – interested in possible contribution (meeting May 9th to discuss)


  • Book to be published by end of year – Handbook on Crime over the Lifecourse. He conducted an analysis of maintained websites – standards and a comparison of the criteria and rigor of research. (He could do further analysis of this work for an entry in the journal.)
  • Discussed Mark Lipsey’s work – presents an alternative to Blueprints. A mainstream approach to change and adaptation that offers an incremental strategy. Explores characteristics of effective strategies based on meta-analysis (regression – what predicts what is most effective). Additional analyses may be worth pursuing to inform the debate and alternatives.
  • Discussed NREPP – clinical interventions; the use of clinical judgment and practice may have as much of an impact as evidence.
  • Discussed the IOM’s 2008 critique of evidence-based lists.
  • Office of Justice Programs (OJP) – www.crimesolutions.gov


Meeting with Lance Till (possible content areas)
  • Evolution within fields (historical entry) – how have standards changed over time? What counts as good enough has changed dramatically.
  • The debate around “What is good enough?” from an evaluation standpoint for choosing programs to fund/implement.
  • Methodology by field – an evaluation of standards.
  • Deconstruction of evaluation – which evaluations should you be reviewing in order to choose a program to implement? What types of support are needed for fidelity?
  • What are the issues/challenges of having various rating systems? What are the implications for the field(s)?
  • Questions we have – What is available as far as social media? Will there be interactive polling capabilities (could contribute to one of the articles)?


Conference call with Abigail Fagan
  • Abby is interested in contributing to the content of the journal.
  • She recently presented with Del Elliott on standards for evaluation and on commonly used lists for determining what constitutes an evidence-based program. Lists often differ by topical area and vary in the criteria used.
  • Discussed the debate and those against replication of canned programs – what are the implications for effective practices?
  • Discussed that there are often mixed findings for programs – how do you work through these types of challenges?
  • Discussed evaluation and fidelity of implementation.




Tuesday, April 24, 2012

Notes from Jonny and Today's Meeting 
As I promised during our phone call today, here are my thoughts on the process and presentation elements of the special issue we discussed. This document ended up a bit longer than I had planned, but it gave me a chance to organize some thoughts that have been rattling about half formed in my mind for some time. I leave it to the rest of you to initiate the discussion on content.

I’m taking the liberty of cc’ing Chris Tancock, EPP’s point of contact at Elsevier. Chris has a keen interest in this topic and may have some ideas about how we should proceed. Chris – weigh in as you are so inclined. Tarek – when you set up the planning blog, please include Chris as a member.

My background
By way of background, my thoughts on this topic result from a confluence of various themes. First, I have long been interested in social networking as a way of generating and sharing knowledge, and also as a means of community building. I tried to do this, not very successfully, at my last job, and was equally unsuccessful in getting our special issue editors to experiment with a social networking dimension to their efforts. Brian Yates is doing a special issue for us on Cost Inclusive Evaluation, and it looks as if he will succeed. I’m hoping that our present effort builds on that success. (I don’t know if Brian would be willing, but I could ask him to act as an advisor for us.)

Second, I have been greatly influenced by two books on social networking. The first is Wiki Government, the story of using social networking to improve the operations of the United States Patent Office. The second is Too Big to Know. The first made me appreciate how social networking needs to be structured to further productive collaboration. The second helped me understand some of the epistemological issues of knowledge in a networked world, and the value of traditional “long form” thought (aka books and articles) in a networked world.

 

Third, as a journal editor, I have been thinking a lot about Elsevier’s “article of the future” initiative and its implications for helping the evaluation community.

 

Finally, I have been involved in helping the U.S. Federal Railroad Administration to implement and evaluate its new social networking efforts. This has led me to learn about IBM’s jams and other similar activities.

 

Principles of action

Out of all the above has come a few principles that I think we should follow.

 

Coherent vision: There needs to be a vision, shared by a small group, of what this special issue should be and what it should accomplish. This vision needs to guide the entire process. Varied opinions are nice, but collaboration does not mean that people can pull a project hither, thither, and yon. I’m not saying that what we begin with should be written in stone, but I do think that changes should be made judiciously by the organizing group.

 

Expertise: Larger groups of people have a greater amount of expertise and insight than smaller groups. The more we can do to expand the flow of ideas, the better.

 

Diverse input: The number of people notwithstanding, diversity of expertise and background has its own valuable effect. Do a thought experiment: do we want ten experts with a background in sociology, or two in sociology, two in economics, two in political science, two in public administration, and two in public health?

 

Collaboration in cyberspace cannot be assumed: Visions of Wikipedia notwithstanding, it is not easy to elicit collaboration in cyberspace. One problem is simply critical mass. Any given topic may need only a few active contributors and a few sometime contributors, but I bet the size of that critical mass is very stable no matter how big the population being drawn from. If 50 people are needed, it is easier to find them in a population of 50,000 than in a population of 500. (I have no doubt there is some good research on this topic. If anyone knows where it is, send it my way.) This means that we will probably have a hard time reaching enough motivated contributors. This brings me to my second point. There are four ways we can get the collaboration we seek. First, make sure we are dealing with a hot topic. (No problem there; we are.) Second, we can lower the transaction cost of getting involved. This means making it easy for people to participate. Third, we can reward people, chiefly by encouraging them to talk about what they are experts in. Fourth, we need to do everything we can to solicit advice from as large and diverse a population as possible.

 

Spreading the word is good for business: Enough said.

 

Use model

The final product will be a set of articles that look as if they are in traditional form, but which are accompanied by a great deal of extra material that is linked to the text. This is easily enough done with Elsevier’s functionality.

 

Note the term “final product” in the above paragraph. We could think of what we are doing as never being “final”, but rather as something with a significant milestone and continuing lower-level activity after that. If people think they won’t run out of steam, we can think in these terms. For now, let’s assume we will do something and finish.

 

Content should be determined by a combination of our beliefs about what is needed and suggestions from the public. I see us constructing a short blurb containing a brief description of what we are planning and a draft outline, and then disseminating it as widely as possible. Old-fashioned email would be fine. The electronic suggestion box would be open.

 

Between our initial thinking and the suggestions, we will end up with a set of topics and a set of authors. A small group will be responsible for each topic, but all involved will have visibility into what the others are doing.

 

As topics are developed, the people doing the work will be charged with identifying particular questions they would like input on. I don’t see us saying to the world: “We are writing about why judgments on best practice differ; what do you think?” I see us saying something like: “We are writing about why judgments on best practice differ. We are wrestling with two questions. 1) Specific methodological expertise notwithstanding, does hands-on service delivery experience affect judgments about whether a recommended best practice is in fact a best practice? 2) Does subject matter affect judgments of the quality of research? For instance, would people come to different conclusions if the topic were child welfare, obesity prevention, conflict resolution, or safety in industrial settings?” By directing questions like this we will accomplish two objectives. First, we will force the topic organizers to think about what they are doing. Second, we will make it interesting for people to participate because we are appealing to their interests and intellectual proclivities.

 

As the above proceeds, it seems likely that communities may form. For instance, it’s not hard to imagine a group of people really interested in the “hands-on” question. Whatever functionality we use, we need to make it easy for these kinds of groups to self-organize.

 

The functionality I am describing will require a fairly sophisticated social networking site and some able system administrators. I’m not sure how to get all this, but I’m hoping the Claremont people can help.

 

There are many more ideas about this in my head, but they are still half formed, I’m tired, and I have to pack to go off on a data collection junket tomorrow. At least this is a start.



Jonny
Jonathan A. Morell Ph.D.
Director of Evaluation
Fulcrum Corporation
734 646-8622
jmorell@fulcrum-corp.com
Respect data. Trust judgment.