
Academia as an ‘anxiety machine’


We learned recently of the suicide of Stefan Grimm, a successful professor at the prestigious Imperial College London. Professor Grimm regularly published highly cited articles in the best journals. He was highly productive. Unfortunately, some of his colleagues felt that he did not secure sufficiently large research grants. So he was to be fired.

It is not that he did not try. He was told that he was the most aggressive grant seeker in his school. He worked himself to death. I am willing to bet that he was failing my weekend freedom test. But he still failed to secure large and prestigious grants because others were luckier, harder working, or even smarter.

It is not remarkable that he felt a lot of pressure at work. It is not remarkable that he was fired despite being smart and hard-working. These things happen all the time. What is fascinating is the contrast between how most people view an academic job and Grimm’s reality.

A fellow professor described academia as an ‘anxiety machine’:

Throw together a crowd of smart, driven individuals who’ve been rewarded throughout their entire lives for being ranked well, for being top of the class, and through a mixture of threat and reward you can coerce self-harming behaviour out of them to the extent that you can run a knowledge economy on the fumes of their freely given labour.

(…)

I know plenty of professors and star researchers who eat, sleep and breathe research, and can’t understand why their junior colleagues (try to) insist on playing with their children on a Sunday afternoon or going home at 6. ‘You can’t do a PhD and have a social life’, my predecessor told me.

It is simply not very hard to find overly anxious professors. I know many who are remarkably smart and who have done brilliant work… but they remain convinced that they are something of a failure.

Successful academics have been trained to compete, and compete hard… and even when you put them in what might appear to be cushy conditions, they still work seven days a week to outdo others… themselves… and then, when they are told by colleagues that it is not yet enough… they take such nonsensical comments at face value… because it is hard to ignore what you fear most…

And it is all seen as a good thing… without harsh competition, how are you going to get the best out of people?

Did you just silently agree with my last sentence? It is entirely bogus. There is no trace of evidence that you can get the best out of people at high-level tasks through pressure and competition. The opposite is true. Worried people get dumber. They may be faster at carrying rocks… but they do not get smarter.

Stressing out academics, students, engineers or any modern-day worker… makes them less effective. If we had any sense, we would minimize competition to optimize our performance.

The problem is not that Grimm was fired despite his stellar performance, the problem is that he was schooled to believe that his worth was lowered to zero because others gave him a failing grade…

Source: Thanks to P. Beaudoin for the pointer.


The lingering seduction of the page


In an earlier post in this series, I examined the articulatory relationship between information architecture and user interface design, and argued that the tools that have emerged for constructing information architectures on the web will only get us so far when it comes to expressing information systems across diverse digital touchpoints. Here, I want to look more closely at these traditional web IA tools in order to tease out two things: (1) ways we might rely on these tools moving forward, and (2) ways we’ll need to expand our approach to IA as we design for the Internet of Things.

First stop: the library

The seminal text for Information Architecture as it is practiced in the design of online information environments is Peter Morville and Louis Rosenfeld’s Information Architecture for the World Wide Web, affectionately known as “The Polar Bear Book.”

First published in 1998, The Polar Bear Book gave a name and a clear, effective methodology to a set of practices many designers and developers working on the web had already begun to encounter. Morville and Rosenfeld are both trained as professional librarians and were able to draw on this time-tested field in order to sort through many of the new information challenges coming out of the rapidly expanding web.

If we look at IA as two faces of the same coin, The Polar Bear Book focuses on the largely top-down “Internet Librarian” side of information design. The other side of the coin approaches the problems posed by data from the bottom up. In Everything is Miscellaneous: The Power of the New Digital Disorder, David Weinberger argues that the fundamental problem with the “second order” (think “card catalogue”) organization typical of library science-informed approaches is that it fails to recognize the key differentiator of digital information: it can exist in multiple locations at once, without any single location being the “home” position. Weinberger argues that in the “third order” of digital information practices, “understanding is metaknowledge.” For Weinberger, “we understand something when we see how the pieces fit together.”

Successful approaches to organizing electronic data generally make liberal use of both top-down and bottom-up design tactics. Primary navigation (driven by top-down thinking) gives us a bird’s-eye view of the major categories on a website, allowing us to quickly focus on content related to politics, business, entertainment, technology, etc. The “You May Also Like” and “Related Stories” links come from work in the metadata-driven, bottom-up space.
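As a toy illustration of that bottom-up tactic, here is a minimal sketch (in Python, with invented story names and tags) of how shared metadata can drive a “Related Stories” link: rank the other items by how many tags they have in common with the current one. Real systems blend many more signals, but the mechanism is the same.

    # Hypothetical story catalogue: each item carries editor- or
    # folksonomy-assigned tags (the bottom-up metadata layer).
    stories = {
        "fed-centenary":  {"economics", "federal-reserve", "history"},
        "assa-meetings":  {"economics", "conference"},
        "iot-design":     {"design", "internet-of-things"},
        "reading-brains": {"design", "cognition", "history"},
    }

    def related(slug, top_n=3):
        """Rank other stories by the number of tags shared with `slug`."""
        tags = stories[slug]
        scored = sorted(
            ((len(tags & other_tags), other)
             for other, other_tags in stories.items() if other != slug),
            reverse=True,
        )
        return [s for score, s in scored[:top_n] if score > 0]

    print(related("fed-centenary"))  # ['reading-brains', 'assa-meetings']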

On the web, this textually mediated blend of top-down and bottom-up is usually pretty successful. This is no surprise: the web is, after all, primarily a textual medium. At its core, HTML is a language for marking up text-based documents. It makes them interpretable by machines (browsers) so they can be served up for human consumption. We’ve accommodated images and sounds in this information ecology by marking them up with tags (either by professional indexers or “folksonomically,” by whoever cares to pitch in).

There’s an important point here that often goes without saying: the IA we’ve inherited from the web is textual — it is based on the perception of the world mediated through the technology of writing. Herein lies the limitation of the IA we know from the web as we begin to design for the Internet of Things.

Reading brains

We don’t often think of writing as “technology,” but inasmuch as technology constitutes the explicit modification of techniques and practices in order to solve a problem, writing definitely fits the bill. Language centers can be pinpointed in the brain — these are areas programmed into our genes that allow us to generate spoken language — but in order to read and write, our brains must create new connections not accounted for in our genetic makeup.

In Proust and the Squid, cognitive neuroscientist Maryanne Wolf describes the physical, neurological difference between a reading and writing brain and a pre-literate linguistic brain. Wolf writes that, with the invention of reading “we rearranged the very organization of our brain.” Whereas we learn to speak by being immersed in language, learning to read and write is a painstaking process of instruction, practice, and feedback. Though the two acts are linked by a common practice of language, writing involves a different cognitive process than speaking. It is one that relies on the technology of the written word. This technology is not transmitted through our genes; it is transmitted through our culture.

It is important to understand that writing is not simply a translation of speech. This distinction matters because it has profound consequences. Wolf writes that “the new circuits and pathways that the brain fashions in order to read become the foundation for being able to think in different, innovative ways.” As the ability to read becomes widespread, this new capacity for thinking differently, too, becomes widespread.

Though writing constitutes a major leap past speech in terms of cognitive process, it shares one very important common trait with spoken language: linearity. Writing, like speech, follows a syntagmatic structure in which meaning is constituted by the flow of elements in order — and in which alternate orders often convey alternate meanings.

When it comes to the design of information environments, this linearity is generally a foregone conclusion, a feature of the cognitive landscape which “goes without saying” and is therefore never called into question. Indeed, when we’re dealing primarily with text or text-based media, there is no need to call it into question.

In the case of embodied experience in physical space, however, we natively bring to bear a perceptual apparatus which goes well beyond the linear confines of written and spoken language. When we evaluate an event in our physical environment — a room, a person, a meaningful glance — we do so with a system of perception orders of magnitude more sophisticated than linear narrative. JJ Gibson describes this as the perceptual awareness resulting from a “flowing array of stimulation.” When we layer on top of that the non-linear nature of dynamic systems, it quickly becomes apparent that despite the immense gains in cognition brought about by the technology of writing, these advances still only partially equip us to adequately navigate immersive, physical connected environments.

The trouble with systems (and why they’re worth it)

Photo: Andy Fitzgerald, of content from Thinking in Systems: A Primer, by Donella Meadows.

I have written elsewhere in more detail about challenges posed to linguistic thinkers by systems. To put all of that in a nutshell, complex systems baffle us because we have a limited capacity to track system-influencing inputs and outputs and system-changing flows. As systems thinking pioneer Donella Meadows characterizes them in her book Thinking in Systems: A Primer, self-organizing, nonlinear, feedback systems are “inherently unpredictable” and “understandable only in the most general way.”

According to Meadows, we learn to navigate systems by constructing models that approximate a simplified representation of the system’s operation and allow us to navigate it with more or less success. As more and more of our world — our information, our social networks, our devices, and our interactions with all of these — becomes connected, our systems become increasingly difficult (and undesirable) to compartmentalize. They also become less intrinsically reliant on linear textual mediation: our “smart” devices don’t need to translate their messages to each other into English (or French or Japanese) in order to interact.
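As a concrete illustration of the kind of simplified model Meadows describes, here is a minimal sketch of a single stock governed by a balancing feedback loop, loosely in the spirit of her thermostat example. The parameters and units are invented for illustration.

    def simulate(temp=10.0, target=20.0, gain=0.5, leak=0.1, steps=10):
        """Track one stock (room temperature) driven by a balancing
        feedback loop (heating toward a target) and a drain (heat loss)."""
        history = [temp]
        for _ in range(steps):
            inflow = gain * (target - temp)  # feedback: heat added shrinks near the target
            outflow = leak * temp            # heat leaking to the outside
            temp += inflow - outflow         # the stock changes only through its flows
            history.append(round(temp, 2))
        return history

    print(simulate())  # the temperature settles below the 20-degree target

Even this toy model shows a classic systems insight: the stock settles where the competing flows balance (about 16.7 degrees here), not at the thermostat’s target.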

This is both the great challenge and the great potential of the Internet of Things. We’re beginning to interact with our built information environments not only in a classically signified, textual way, but also in a physical-being-operating-in-the-world kind of way. The text remains — and the need to interact with that textual facet with the tools we’ve honed on the web (i.e. traditional IA) remains. But as the information environments we’re tasked with representing become less textual and more embodied, the tools we use to represent them must likewise evolve beyond our current text-based solutions.

Fumbling toward system literacy

In order to rise to meet this new context, we’re going to need as many semiotic paths as we can find — or create. And in order to do that, we will have to pay close attention to the cognitive support structures that normally “go without saying” in our conceptual models.

This will be hard work. The payoff, however, is potentially revolutionary. The threshold at which we find ourselves is not merely another incremental step in technological advancement. The immersion in dynamic systems that the connected environment foreshadows holds the potential to re-shape the way we think — the way our brains are “wired” — much as reading and writing did. Though mediated by human-made, culturally transmitted technology (e.g. moveable type, or, in this case, Internet protocols), these changes hold the power to affect our core cognitive process, our very capacity to think.

What this kind of “system literacy” might look like is as puzzling to me now as reading and writing must have appeared to pre-literate societies. The potential of being able to grasp how our world is connected in its entirety — people, social systems, economies, trade, climate, mobility, marginalization — is both mesmerizing and terrifying. Mesmerizing because it seems almost magical; terrifying because it hints at how unsophisticated and parochial our current perspective must look from such a vantage point.

As information architects and interface designers, all of this means that we’re going to have to be nimble and creative in the way we approach design for these environments. We’re going to have to cast out beyond the tools and techniques we’re most comfortable with to find workable solutions to new problems of complexity. We aren’t the only ones working on this, but our role is an important one: engineers and users alike look to us to frame the rhetoric and usability of immersive digital spaces. We’re at a major transition in the way we conceive of putting together information environments. Much like Morville and Rosenfeld in 1998, we’re “to some degree all still making it up as we go along.” I don’t pretend to know what a fully developed information architecture for the Internet of Things might look like, but in the spirit of exploration, I’d like to offer a few pointers that might help nudge us in the right direction — a topic I’ll tackle in my next post.


JMP Vox: Introducing Vox columns on Job Market Papers

The Editors, 4 January 2014

Vox has launched a new tab – JMP Vox – that features short Vox columns written by PhD candidates on their Job Market Papers. The main goal is to provide a platform for excellent research that will not appear in journals or major discussion paper series for years. It is also a means for established economists to more easily track the research of the youngest members of the profession.

Full Article: JMP Vox: Introducing Vox columns on Job Market Papers

Continuing Education


PHILADELPHIA — The annual meeting of the Allied Social Science Association is many things. (The ASSA is a collection of some sixty organizations, of which the American Economic Association is by far the biggest. The American Finance Association is second.) It’s a job fair, a ceremonial pageant, a swift channel into print for late-breaking work by a fortunate few dozen researchers, a public relations event, and a continuing education opportunity for most of the twelve thousand or so professors, policy economists, and other members who registered this year. (Not all of them made it through the snow!)

The PR event this year surely had to do with the hundredth anniversary last month of the founding of the Federal Reserve System. Many sessions were devoted to the Fed’s role in the economy, including one that featured presidents of four regional banks – William Dudley of New York, Charles Plosser of Philadelphia, Eric Rosengren of Boston, and Narayana Kocherlakota of Minneapolis (who was snowed in) – talking about the combination of decentralization and consensus-seeking central guidance that is the system’s most distinctive feature.

Ben Bernanke, retiring this month after eight years as chairman, received an unprecedented standing ovation after a talk to an audience of some two thousand economists – a sign of the growing recognition that strong leadership from the Fed (not just its governors and bank presidents but its senior staff) staved off a concatenation of events that might have dwarfed the Great Depression.

President-elect William Nordhaus, of Yale University, organized the meetings; incoming president-elect Richard Thaler, of the University of Chicago’s Booth School of Business, will preside over next year’s meetings, in Boston. Raj Chetty, of Harvard University, received the John Bates Clark Medal, now given annually to the most influential economist under forty.

Claudia Goldin, of Harvard University, delivered the AEA presidential address (“A Grand Gender Convergence”). James Heckman, of the University of Chicago, gave the Econometric Society presidential address (“The Economics of Human Development”). Jeremy Stein, a governor of the Federal Reserve Board, gave the joint AEA/AFA lecture. James Poterba, of the Massachusetts Institute of Technology, gave the Ely lecture (“Retirement Security in an Aging Population”). Paul Milgrom, of Stanford University, and Roger Myerson, of the University of Chicago, spoke at a luncheon honoring Lloyd Shapley, of the University of California at Los Angeles, and Alvin Roth, of Stanford University, last year’s winners of the Nobel Prize.

Four Distinguished Fellows were named, a kind of consolation prize for those unlikely to become the association’s president: theorist Harold Demsetz, of the University of California at Los Angeles; central banker Stanley Fischer, expected to be nominated soon to be vice chair of the Fed; econometrician Jerry Hausman, of MIT; and regulatory economist Paul Joskow, president of the Alfred P. Sloan Foundation.

Interviews of the most recent crop of graduates to enter the job market will yield invitations for many to visit campuses and give talks; some will eventually receive offers. Outcomes of this race to the beginning of the course won’t become clear until April. What’s striking is how much time and effort department members spend in choosing next year’s junior colleagues.

The new material presented in around twenty privileged sessions designated by the president will appear in May, in the “Papers and Proceedings” number of the American Economic Review. The AER this month increased its frequency to monthly (up from five issues a year in 2010), reflecting a dramatic increase over the last quarter century in the breadth, depth, and relevance of the discipline.

And the continuing education, my own and others’? Watch this space.



Not all citations are equal: identifying key citations automatically


Suppose that you are researching a given issue. Maybe you have a medical condition or you are looking for the best algorithm to solve your current problem.

A good heuristic is to enter reasonable keywords in Google Scholar. This will return a list of related research papers. If you are lucky, you may even have access to the full text of these research papers.

Is that good enough? No.

Scholarship, on the whole, tends to improve with time. More recent papers incorporate the best ideas from past work and correct mistakes. So, if you have found a given research paper, you’d really want to also get a list of all papers building on it…

Thankfully, a tool like Google Scholar allows you to quickly access a list of papers citing a given paper.

Great, right? So you just pick your research paper and review the papers citing it.

If you have ever done this work, you know that most of your effort will be wasted. Why? Because most citations are shallow. Almost none of the citing papers will build on the paper you picked. In fact, many researchers barely even read the papers that they cite.

Ideally, you’d want Google Scholar to automatically tell apart the shallow citations from the real ones.

This whole problem should be familiar to anyone involved in web search. Lots of people try to artificially boost the ranking of their web sites. How did the likes of Google respond? By getting smarter: they use machine learning to learn the best ranking using many parameters (not just citations).

It seems we should do the same thing with research papers. Last year, I looked into the problem and found that machine learning folks had not addressed this issue to my satisfaction. To help attract attention to this problem, we asked volunteers to identify which references, in their own papers, were really influential. We recently made the dataset available.

The result? Our dataset contains 100 annotated papers. Each paper cites an average of 30 papers, of which only about 3 are influential.

If you can write software to identify these influential citations, then you could make a system like Google Scholar much more useful. The same way Gmail sorts my mail into “priority” and “the rest,” you could imagine Google Scholar sorting citations into “important” and “the rest.” Google Scholar could also offer better ranking. That would be a great time saver.

Though we made the dataset available, we did not want to passively wait for someone to check whether it was possible to do something with it. The result was a paper recently accepted by JASIST for publication, probably in 2014. Some findings that I find interesting:

  • Out of dozens of features, the most important is how often the reference is cited within the body of the citing paper. That is, if you keep citing a reference, you are more likely to be building on it (see the sketch after this list).
  • The recency of a reference matters. For example, if you are citing a paper that just appeared, your citation is more likely to be shallow.
  • We have been using citations to measure the impact of a researcher through the h-index. Could we get a better measure if we gave more weight to influential references? To this end, we proposed the hip-index, and it appears to be better than the h-index. See Andre Vellino’s blog post on this topic.
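As a toy illustration of how the first two findings might feed a classifier, here is a minimal sketch in Python using logistic regression from scikit-learn. The feature encoding, the labeled examples, and the numbers are all invented; the actual paper evaluates many more features on the real annotated dataset.

    from sklearn.linear_model import LogisticRegression

    def features(mentions, age_years):
        """One citation -> [in-text mention count, age of the cited paper]."""
        return [float(mentions), float(age_years)]

    # Hypothetical labeled citations: (in-text mentions, age of the
    # cited paper in years, 1 = influential / 0 = shallow).
    labeled = [
        (8, 10, 1),  # cited throughout the paper: likely built upon
        (6, 4, 1),
        (5, 6, 1),
        (1, 0, 0),   # mentioned once, appeared this year: likely shallow
        (1, 3, 0),
        (2, 1, 0),
    ]

    X = [features(m, a) for m, a, _ in labeled]
    y = [label for _, _, label in labeled]
    model = LogisticRegression().fit(X, y)

    # Probability that a new citation (4 mentions, 5 years old) is influential.
    print(model.predict_proba([features(4, 5)])[0][1])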

You can grab the preprint right away. Though I think the paper does a good job of covering the basic features, there is much room for improvement and related work. We have an extensive future-work section, which you should check out if you are interested in contributing to this important problem.

Credit: Most of the credit for this work goes to my co-authors. Much of the heavy lifting was done by Xiaodan Zhu.


Reading Made Awesome: The Features of Ebook Apps You Should Be Using



Reading books on tablets or phones is awesome. There, I said it and I'm not taking it back. While the biggest advantage of reading on a mobile device is convenience and a huge portable library, there are a ton of features that make the experience awesome.
