I have been looking for "strategic alternatives" to my current employment for a while now, and I have discovered there is not much of a market for generalists, particularly at junior-level positions. The algorithms used by online job applications to weed out the chaff are rarely impressed with the argument, "My wide array of interests and experiences positions me well to specialize in whatever tasks are required."
Truthfully, I don't blame the developers of these algorithms or the recruiters. My argument is so abstract and intangible that it is a nightmare for statisticians to develop concrete indicators that output any value useful for comparing candidates. Think about it: how does a generalist go about proving they can thrive in a position despite having only a shallow understanding of the technical issues at hand?
Employers tend to focus on what is much easier to prove: is the candidate technically competent in the required field? For example, if I were hiring an engineer who specialized in superalloys, asking "What are some properties of superalloys?" is a useful question. From a liberal arts perspective, if I were interested in someone who was an expert on China, an inability to identify Beijing on a map would be very telling.
These questions have concrete, measurable answers that make comparing applicants easy. Non-technical and soft skills are far harder to judge, which sucks for generalists.
Some companies (particularly in the technology sector) have tried to be creative by asking applicants unusual brainteasers. However, Google has said its data shows no correlation between such questions and good hires. Just as interesting, excluding fresh graduates, it found little correlation between college GPAs and quality of work.
Employers have long used a mixture of personality tests, IQ tests, and standardized questions to help judge whether a candidate has the less easily definable non-technical skills and traits that make them a good fit. Historically, the benefits have been questionable. However, there has been a significant uptick in their use in the last few years as researchers have utilized big data to refine and tweak tests for optimal results. Properly designed, there appears to be significant value in combining "personality, cognitive, and interests" tests.
However, the usefulness of these tests is all based on statistical models that make use of vague or questionable qualitative assumptions. For example, how does the researcher define "job performance" or a "good hire"? Is it based on employee longevity? Perhaps labor productivity in revenue?
An interesting concept that may refute some of the usefulness of these tests is whether they promote groupthink. That is, do they weed out the eccentric or unusual candidates who could produce truly disruptive (in a good way) results? I wonder how General Patton would do on these tests. I won't pretend I have conducted a significant amount of research on this topic. Indeed, I have spent about two hours on this post. However, I think it is an interesting topic, and from the little I have read, I haven't seen much that really addresses this issue.
Nevertheless, employers need some way to weed out candidates, and quantitative data has proven to be cheap and reasonably effective. Yet, as a generalist looking for a job who rarely even gets to the test-taking steps due to not having a hard specialty, I can't help but wonder if all it really does is promote conformity.
Next post will likely (maybe) be on US-Filipino relations and China.