Every log backup doesn't only back up a single log segment, it also backs up the backup catalog. If the catalog is 1 MB or 5 MB in size, this will not massively impact the runtime. But if the catalog is e.g. 200 MB, it can have a big impact, because the individual log backups can then be smaller than the catalog itself (e.g. 64 MB for a statisticsserver log, 8 MB for an xsengine log). I have seen a system with a backup catalog of 23 GB (and 54 million records); you can imagine that each individual log backup took ages. So I agree with Geo that the catalog size is an important factor for log backup performance.
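If you want to check where your own system stands, something along these lines should do it (a sketch based on the standard monitoring views SYS.M_BACKUP_CATALOG and SYS.M_BACKUP_CATALOG_FILES; exact column contents can differ slightly between revisions):

-- Number of entries in the backup catalog
SELECT COUNT(*) AS CATALOG_ENTRIES FROM "SYS"."M_BACKUP_CATALOG";

-- Size of the catalog as written with the most recent backup
SELECT TOP 1 BACKUP_SIZE
  FROM "SYS"."M_BACKUP_CATALOG_FILES"
 WHERE SOURCE_TYPE_NAME = 'catalog'
 ORDER BY BACKUP_ID DESC;

If the second query returns a value in the three-digit MB range or higher, the catalog is likely a relevant part of every log backup's runtime.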
The easiest way to end up with a backup catalog large enough to slow down log backups is to force an error at the beginning of the log backup (e.g. via an erroneously set mountpoint). SAP HANA will then permanently try to start the log backup and fail, every failure is recorded in the backup catalog, and after a few hours you will have millions of entries. If you do this, make sure you are at least on revision 74.01, because on older revisions the cleanup of the catalog takes ages due to an inefficient record-wise deletion strategy.
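For the cleanup itself, the catalog can be truncated with BACKUP CATALOG DELETE. A rough sketch (the <backup_id> is only a placeholder for the oldest data backup you still need for recovery; pick it from the first query):

-- Find the data backups you still want to keep
SELECT BACKUP_ID, SYS_START_TIME
  FROM "SYS"."M_BACKUP_CATALOG"
 WHERE ENTRY_TYPE_NAME = 'complete data backup'
 ORDER BY SYS_START_TIME DESC;

-- Remove all catalog entries older than that backup
-- (append COMPLETE if the corresponding file-based backups should be deleted from disk as well)
BACKUP CATALOG DELETE ALL BEFORE BACKUP_ID <backup_id>;

On revision 74.01 and higher this truncation is fast even for millions of entries; on older revisions the same statement triggers the record-wise deletion mentioned above.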