A popular term in the field of optical sensing right now is "task-specific sensing." It is a system design paradigm in which the relationships between a system's components are optimized towards the purpose of the system. This is opposed to the idea of making the components perform as efficiently as possible on their own. For example, a system that only needs to detect an object in its field of view does not need to have a lens design that reduces aberrations and increases spatial resolution. Instead, a scene simply has to be imaged onto the sensor in a way that facilitates efficient image processing by the software. In other words, the relationships between the optics, electronics, and software should be optimized towards the goal of detecting an object, not seeing it clearly.
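To make that concrete, here is a toy sketch (my own illustration, not anything from a real task-specific system): if the task is only "did something enter the scene?", a heavily downsampled, blurry image compared against a background frame is enough, with no aberration-corrected optics or high-resolution sensing anywhere in the pipeline. The function names and the 8×8 block size are arbitrary choices for the example.

```python
import numpy as np

def downsample(img, block=8):
    """Average over block x block pixel regions, mimicking a
    low-resolution, aberrated optical path."""
    h, w = img.shape
    return img[:h - h % block, :w - w % block] \
        .reshape(h // block, block, w // block, block).mean(axis=(1, 3))

def detect_object(frame, background, threshold=0.1):
    """Report whether something entered the scene by comparing
    coarse frames -- no sharp imaging required for this task."""
    diff = np.abs(downsample(frame) - downsample(background))
    return bool(diff.max() > threshold)

# Usage: an empty scene, then the same scene with a bright blob added.
background = np.zeros((64, 64))
frame = background.copy()
frame[20:30, 20:30] = 1.0  # "object" enters the field of view
print(detect_object(frame, background))      # -> True
print(detect_object(background, background)) # -> False
```

The point of the sketch is that the detection threshold operates on 8×8-pixel averages: optics that image the scene this coarsely would be useless for "seeing clearly," yet they are perfectly matched to the detection task.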
Nature has been performing task-specific design for a long time. The compound eye of a fly has very poor resolution, since each bump on the eye acts as a single lens coupled to a sensing structure. Fortunately for the fly, it does not need to see well to find food. It does, however, need to avoid predators if it wishes to remain alive. The fly's eye has an extremely large field of view so that it can see things such as flyswatters coming at it from many different angles.
The task-specific paradigm has also led me to think about how to value research projects in academia. There is a very common notion that any research is good research. However, if a project creates some device or accomplishes some goal with no particular application in mind, then the idea of task-specific sensing might suggest that the simplest and least costly approach would have been not to do the research at all, since no need for it existed. I'll pose the question like this: which should come first, the need for a researched solution, or the solution itself?
Of course, the argument exists that research performed without a particular need may eventually find a use, but I think my question is still a valid one to ask before placing value on research.