I expect that science will look entirely different in a hundred years. Today, we cannot systematically evaluate science in any other way than by "ticking off boxes" for published articles. Scientists are often forced to publish an excessive number of papers to accumulate as many citations as possible and climb the traditional career ladder. Moreover, a scientist's reputation today also depends on their visibility—how often they are seen or heard—rather than purely on the value of their outputs.
One of the fundamental flaws of current science is that research findings are often inaccessible. I believe and hope that the way science is conducted will soon become more rational in these and other respects.
Research data will play a major role in how science evolves. We already know that data are at least as valuable as their interpretation. Data can often be interpreted in multiple ways, and publishing only a single research outcome as a paper means losing a vast amount of information—and potential for further research. Moreover, data provide a way to verify the quality of research.
The FAIR principles (Findable, Accessible, Interoperable, Reusable) and the broader Open Science movement offer a direction for improving the quality of science. Open Science advocates making scientific results more accessible, both to other researchers and to the public. FAIR-compliant data are, in essence, well-managed data, and by managing FAIR data the EOSC CZ initiative aims to enable better and simpler use of research data.
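To make the idea of "well-managed data" concrete, here is a minimal sketch of a machine-readable dataset metadata record. The field names are loosely inspired by common repository schemas such as DataCite, but they are illustrative assumptions, not an authoritative FAIR specification; the DOI and URL are hypothetical placeholders.

```python
import json

# Illustrative metadata record; field names and values are hypothetical.
record = {
    "identifier": "10.1234/example-doi",   # Findable: a persistent identifier (placeholder DOI)
    "title": "Sample measurement dataset",
    "creators": ["J. Novak"],
    "publicationYear": 2024,
    "license": "CC-BY-4.0",                # Reusable: explicit usage terms
    "format": "text/csv",                  # Interoperable: an open, standard format
    "url": "https://repository.example.org/datasets/42",  # Accessible: a retrievable location
}

def missing_fields(rec, required=("identifier", "title", "creators", "license")):
    """Return the required fields that are absent or empty in a record."""
    return [f for f in required if not rec.get(f)]

# A well-managed record passes this check; a bare-minimum one does not.
print(json.dumps(record, indent=2))
print("missing:", missing_fields(record))
```

Even a simple completeness check like this captures part of what FAIR asks for: without a persistent identifier, named creators, and a license, the data can be published yet remain effectively unfindable and unreusable.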
Certainly, this is part of the issue. The main characteristics of standard scientific output, particularly article publishing, can be described as follows:
Firstly, there are incentives to publish what is referred to as the MPU (Minimum Publishable Unit). Systemic pressures often lead researchers to publish four smaller articles rather than a single high-quality one, thereby accumulating more, let's say, academic capital.
Secondly, and this is something that can be measured and quantified fairly well, there is a reproducibility problem. When someone publishes an article and another researcher attempts to replicate its results, the replication succeeds in only a small percentage of cases.
There are two main explanations for this. The most common reason, in my opinion, is that the methodology is not described precisely enough in the article, making the research impossible to replicate. Alternatively, the author may have made an error or, in extreme cases, fabricated the data. Such failures are numerous, and this broader issue is often referred to as the reproducibility crisis in contemporary science. Open Science, and research data attached to publications, aim to at least partially address this problem.