Controversially, more data is not necessarily better than less data. The explosion of data has led to a number of interesting practical and theoretical problems, among them the need to filter, process, verify, index, distribute, and protect the data, and to make redundant copies of it. This data 'massaging' usually takes considerable time and processing power. Moreover, the quantity of the collected data does not necessarily imply quality, as much of the data is repetitive or carries no new information. Nevertheless, it still has to be processed and filtered, it consumes substantial communication bandwidth, and it has to be protected from breaches and from storage failures. In this position paper we propose to apply data reduction techniques to the collected (big) data before it is gathered in a single location. In many cases (exemplified by two use-cases), especially in the Internet of Things (IoT), these techniques can save tremendous amounts of power, processing time, and network traffic.
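As a minimal illustration of the kind of edge-side reduction we have in mind (the specific technique, threshold, and readings below are hypothetical, not the paper's proposed method), consider send-on-delta (dead-band) filtering: a sensor transmits a reading only when it deviates from the last transmitted value by more than a tolerance, suppressing repetitive data before it ever leaves the device.

```python
def deadband_filter(readings, threshold=0.5):
    """Send-on-delta filtering: emit a reading only when it differs
    from the last *transmitted* value by more than `threshold`.
    Repetitive readings are dropped at the edge, before transmission."""
    transmitted = []
    last = None
    for r in readings:
        if last is None or abs(r - last) > threshold:
            transmitted.append(r)
            last = r
    return transmitted

# Hypothetical temperature samples: mostly repetitive, few real changes.
samples = [20.0, 20.1, 20.05, 21.0, 21.1, 19.0, 19.05]
reduced = deadband_filter(samples)
# reduced == [20.0, 21.0, 19.0] -- 7 samples shrink to 3 transmissions
```

Even this trivial filter cuts the transmitted volume by more than half on the sample data, at the cost of a bounded approximation error; richer reduction techniques trade accuracy, computation, and bandwidth along the same lines.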