Over the summer, my department held a series of teaching workshops. These workshops gave us a chance to come together and reexamine classroom practices such as in-class exercises, group projects, and educational simulations. The workshop I have continued to think about most was the one on the role of academic research in teaching business students. The presenters in that workshop did the best job I’ve seen of making the case for why academic research helps us to understand the business world, beyond the understanding derived from years of in-the-field business experience.
Before I relay that case to you, I want to share the context for the discussion. Students who major in business often choose it because they see it as a straight path to a career. As a group, our students tend to be more interested in practical application than esoteric intellectual inquiry. I am not taking sides here: I love a good esoteric intellectual inquiry as much as the next gal, but I also get the appeal of practical application.
Our faculty include both traditional doctorate-holding academics and experienced business professionals. While that distinction is salient to faculty, it appears largely lost on the students. Students know a good teacher when they see one: someone who knows her stuff, is interested in them, can hold attention, and grades fairly. Most of our students don’t know what an “assistant professor” is, or how that title differs from a “senior instructor.” (Both teach, but an assistant professor is active in research, while a senior instructor does not have research responsibilities.)
Therefore, a natural question to ask is: "If the students are more interested in practical application, why do we have traditional academics teaching in business schools at all?"
My colleagues in the summer workshop described two strengths developed by academic research. First, academic research tackles head-on what is “luck” vs. what is a measurable effect. We can think of luck as unexplained variance in a model, that is, variation in the outcome that is not systematically linked to variation in something else (an action like promotion or a characteristic like time, location, or gender). Second, academics are sticklers about claims of causation. Just because two things co-occur does not mean one causes the other. Lots of police officers in areas with high crime? We do not conclude that police officers are causing the crime! Similarly, high price and high demand for a new product? We don’t conclude causation.
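To make the "luck as unexplained variance" idea concrete, here is a minimal Python sketch. The scenario and numbers are hypothetical (a made-up promotion effect on store sales, not data from the workshop): we simulate outcomes driven partly by a real, measurable effect and partly by noise, fit a one-variable least-squares regression, and see how much of the variation the model explains versus how much is left over as "luck."

```python
import random
import statistics

random.seed(42)

# Simulate 1,000 store outcomes: sales depend partly on a promotion
# (a measurable effect of +10) and partly on noise ("luck").
n = 1000
promotion = [random.choice([0, 1]) for _ in range(n)]
sales = [50 + 10 * p + random.gauss(0, 20) for p in promotion]

# Fit a simple one-variable linear regression by least squares.
mean_p = statistics.mean(promotion)
mean_s = statistics.mean(sales)
cov = sum((p - mean_p) * (s - mean_s) for p, s in zip(promotion, sales)) / n
slope = cov / statistics.pvariance(promotion)
intercept = mean_s - slope * mean_p

# Decompose the variance: R^2 is the share explained by the promotion;
# the remainder (1 - R^2) is variation not linked to anything we measured.
predicted = [intercept + slope * p for p in promotion]
residuals = [s - f for s, f in zip(sales, predicted)]
r_squared = 1 - statistics.pvariance(residuals) / statistics.pvariance(sales)

print(f"estimated effect of promotion: {slope:.1f}")
print(f"share of variance explained (R^2): {r_squared:.2f}")
print(f"share attributable to 'luck': {1 - r_squared:.2f}")
```

Notice that even with a genuine, correctly estimated effect, most of the variance here is unexplained: a manager watching any one store could easily mistake the noise for the effect, or vice versa. That is the distinction the workshop presenters argued academic training makes visible.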
Years of experience provide insights about the world, but they don't have the systematic approach of a good academic study for discerning what is true. To tell students "this is how we did it at Acme, Inc." is OK, but it's even better if our business-professionals-turned-college-instructors can scrutinize their own experience to determine which practices the evidence actually supported. Luck is underrated by most people.
The workshop session turned to a discussion of whether the primary value to students of our academic backgrounds was in (1) our knowledge of the substantive findings in our field(s) and the ability to evaluate, digest, and communicate them, or (2) our expertise about the process: curiosity about the way the world works and our training in gathering evidence to answer questions.
The challenge with (1) is that substantive findings are so often "it depends." For example, in online advertising, is it more effective to show someone a more specific ad (e.g., a retargeted ad, those ads for products you have looked at that seem to be following you around on the web), or is it better to show a more generic ad (e.g., an ad featuring the brand but not a specific product)? And the answer is…you guessed it…it depends. What does it depend on? On where the person is in the search process. Which, unfortunately, may be practically difficult to ascertain. Are there specific action steps I can give to my students based on that conditional logic? Maybe.
We all agreed that we have a lot to offer our students on benefit (2). All educators share the goal of improving our students’ “critical thinking.” The skill of imposing structure on an important question and marshaling evidence to answer it is a nice example of a very useful type of critical thinking. I think the term critical thinking is overused and usually too vague, but this particular stripe of critical thinking is concrete enough to mean something.
Months after the workshop, in composing this post, I have been thinking about a third classroom benefit from our academic backgrounds: an analytical approach to teaching. If we think about what constitutes evidence of student learning, we realize that the same sorts of approaches we use in our research are relevant for the classroom. What question could I ask the students to distinguish the ones who really understand an important point from the ones who hold what I know to be a common misconception? Those questions aren't just for grading. They are an important part of a feedback loop in the classroom: "Are they getting it?" If not, what can I do next to help them get it?
On our campus, our science departments lead the way in this diagnostic approach to teaching. For both our research faculty and our faculty with substantial industry experience, following the example of our colleagues in science is a worthy goal. I hope this third benefit also becomes part of the conversation in our halls.