There is no automated or built-in support today for isolating the usage of particular notebooks on Databricks. That said, one approach is to use the Ganglia metrics available for Databricks clusters.
If you run both notebooks at the same time, it will be difficult to attribute a specific quantity of usage to either one. Instead, run one notebook to completion and record the cluster's usage, then run the second notebook to completion and observe its usage. Comparing the two gives you a baseline for how each notebook utilizes the cluster's resources.
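As a complement to reading Ganglia, a rough sketch of the same idea is to wrap each notebook's driver-side work in a small profiling helper and compare the recorded numbers afterward. This only captures Python time and memory on the driver (not executor or cluster-wide usage, which is what Ganglia shows); the helper and workload names below are illustrative, not a Databricks API.

```python
import time
import tracemalloc

def profile_run(workload, *args, **kwargs):
    """Run a workload and return (result, elapsed_seconds, peak_bytes).

    Measures only driver-side Python wall time and peak allocations;
    cluster-wide usage still requires Ganglia (or similar) metrics.
    """
    tracemalloc.start()
    start = time.perf_counter()
    result = workload(*args, **kwargs)
    elapsed = time.perf_counter() - start
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return result, elapsed, peak

# Hypothetical stand-in for the work done by one notebook.
def notebook_a_workload():
    return sum(i * i for i in range(100_000))

result, elapsed, peak = profile_run(notebook_a_workload)
print(f"elapsed={elapsed:.3f}s, peak={peak} bytes")
```

Running each notebook's entry point through the same helper, one at a time, gives you comparable per-notebook numbers to sit alongside the cluster-level Ganglia readings.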