With the advancement of artificial intelligence and machine learning methods, autonomous approaches are recognized to have great potential for performing TAS experiments more efficiently. In our view, it is crucial for such approaches to provide thorough evidence of their performance improvements in order to gain acceptance within the community. We therefore propose a benchmarking procedure, designed as a cost-benefit analysis, that is applicable not only to TAS but also to any scattering method that collects data sequentially during an experiment. For a given approach, performance is assessed by how much benefit it is able to acquire in predefined test cases within a given cost budget. Different approaches can thus be compared on equal footing and make their advantages explicit and visible. We specify the key components of the benchmarking procedure for a TAS setting and discuss potential limitations.
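To make the cost-benefit idea concrete, the core loop of such a benchmark might look like the following minimal sketch. All names here (`run_benchmark`, `benefit_fn`, `cost_fn`, the toy signal, and the grid-scan baseline) are hypothetical illustrations, not part of the proposed procedure itself: an approach repeatedly suggests measurement points, each point consumes part of a fixed cost budget (e.g. beamtime), and the accumulated benefit is evaluated on a predefined test case.

```python
import math

def run_benchmark(approach, benefit_fn, cost_fn, budget):
    """Run one test case: let the approach pick measurement points
    until the cost budget is exhausted, then report total benefit."""
    spent = 0.0
    measured = []
    while True:
        point = approach(measured)     # approach proposes the next point
        cost = cost_fn(point)
        if spent + cost > budget:      # stop once the budget is exhausted
            break
        spent += cost
        measured.append(point)
    return benefit_fn(measured), spent

# Hypothetical test case: a synthetic 1D signal with two intensity peaks.
def signal(x):
    return (math.exp(-((x - 0.3) ** 2) / 0.01)
            + math.exp(-((x - 0.7) ** 2) / 0.01))

def benefit_fn(points):
    # Benefit: total signal intensity captured by the measured points.
    return sum(signal(x) for x in points)

def cost_fn(point):
    # Cost: one unit of (e.g.) counting time per measured point.
    return 1.0

# Naive baseline approach for comparison: scan a uniform grid.
grid = iter(i / 20 for i in range(20))
def grid_scan(measured):
    return next(grid)

benefit, spent = run_benchmark(grid_scan, benefit_fn, cost_fn, budget=10.0)
print(f"benefit={benefit:.2f} at cost={spent:.1f}")
```

Any autonomous approach implementing the same `approach(measured)` interface could be plugged into this loop and compared against the baseline on identical test cases and budgets, which is the sense in which the procedure makes advantages explicit.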