If you want to know about a physical environment, use sensors. Take pictures, take air quality readings, take water quality readings, detect radiation levels, use strain gauges, listen to noise and analyse the signals in it. Smell, measure, touch (if it’s not dangerous to do so) what’s around you. Find people who can teach you how to analyse what you see: imagery interpreters, naturalists and earth scientists for those pictures; acoustics experts, ornithologists and whale experts (as appropriate) for the noises. You’ll learn a lot, and maybe question a lot too, like why some places have higher radiation readings than others, but at some point you’ll realize that the data available to you is limited by the number of sensors you can use or place by yourself.
Which is where crowds come in. If you want to understand how the environment changes over time, or how the environment differs over a wide area, you’ll either have to dedicate a lot of time to placing sensors, or work out how to involve other people and organisations in your data gathering.
Journalists are used to crowdsourcing now. In truth, they’ve been used to it for perhaps longer than even they know: my investigations into the roots of sensor journalism found Francis Galton crowdsourcing weather observations in 1870, and earlier than that, groups of amateurs collecting readings for the Central England Temperature series from 1659 to the present day. Skip forward hundreds of years, and it’s not unusual for a data journalist to crowdsource open data, or to use the Mechanical Turk crowdsourcing site to sift through documents and data.
But how do we ask a crowd of people to place sensors for us? How do we find them? How do we equip and train them? What are our legal responsibilities when we’re asking them to do physical tasks in the outside environment? And how do we guarantee that the data we get back will be good – or at least, of a known and consistent quality?