Unfortunately, WebInspect's integration with Selenium is essentially a real-time replay of the Selenium scripts, used as the Crawl phase of the scan. WebInspect cannot simply consume your JAR file. It requires you to set up a listener/proxy of some sort so that, when the script replays, WebInspect can capture the traffic; it then performs an Audit-Only scan of what it saw. There are two methods for inserting this proxy technology into the process, as detailed in the WebInspect Help. The user must configure a few settings so that when WebInspect replays the Selenium script, everything connects automatically.
e.g. from WI 22.10:
file:///C:/ProgramData/HP/HP%20WebInspect/Help/WebInspect/index.htm#Selenium_WD_1.htm?TocPath=Using%2520WebInspect%2520Features%257CIntegrating%2520with%2520Selenium%2520WebDriver%257C_____0
Besides Selenium, there are several alternatives when it comes to functional-test-driven WebInspect scans. You had asked about requiring the dev staff to record something to provide to you for WebInspect.
Have the QA team capture their Selenium test runs using BURP Proxy, and have them save that captured proxy traffic as an artifact for the security team, e.g. "macro1.burpcap". Then use the Workflow-driven Scan wizard options in WebInspect and simply import that BURP capture as a native Workflow Macro. I like this option since BURP is easy to acquire and run, and, being a Java app, it supports multiple operating systems.
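For illustration, here is a minimal sketch (Python) of how the QA team might point a Selenium WebDriver run at BURP's proxy listener so the traffic gets captured; the target URL and the 127.0.0.1:8080 listener address are assumptions, and exporting the artifact still happens in the BURP UI.

```python
# Sketch: route a Selenium functional test through BURP's proxy listener so
# the traffic can be captured and exported for the appsec team.
# Assumptions: BURP is listening on 127.0.0.1:8080 (its default) and the
# target URL is a placeholder.
from selenium import webdriver

BURP_LISTENER = "127.0.0.1:8080"             # BURP's default proxy listener
TARGET_URL = "https://qa.example.com/login"  # placeholder application URL

options = webdriver.ChromeOptions()
options.add_argument(f"--proxy-server=http://{BURP_LISTENER}")
# Test environments often use self-signed certs; trust BURP's CA or, for a
# throwaway QA browser profile only, ignore certificate errors.
options.add_argument("--ignore-certificate-errors")

driver = webdriver.Chrome(options=options)
try:
    driver.get(TARGET_URL)
    # ... the QA team's existing functional test steps go here ...
finally:
    driver.quit()

# Afterward, in BURP, export the captured proxy history as the artifact
# (e.g. "macro1.burpcap") handed to the security team.
```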
WebInspect's own Web Proxy could also be used, in the same way BURP was used above. However, this complicates things for your dev team, as they do not have access to WebInspect. There are other free options for WebInspect customers which your dev team could install, including the standalone WebInspect Toolkit, the standalone Web Proxy tool, or the Web Proxy API tool (a REST service). One annoyance with all of these today is that they (currently) require Windows, and an authorized WebInspect user (you) has to download the installers and deploy them inside your network for the dev staff to get.
The WebInspect REST API offers several endpoints for proxy listeners. This means that remote users (i.e. your devs) could spawn a proxy listener, run their functional test script through that proxy, have the captured data saved as a Workflow Macro, and then kill the listener. By itself, this combination could produce the artifacts your appsec team will want to use later in their Workflow-driven Scan.
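As a rough, hedged sketch of that sequence (the endpoint paths, port, and JSON field names below are assumptions; check the Swagger UI on your WebInspect API instance for the exact names in your version):

```python
# Sketch of the proxy-listener flow against the WebInspect REST API.
# All paths, ports, and field names are assumptions to be verified against
# your version's API documentation.
import requests

API = "http://webinspect.example.com:8083/webinspect"  # assumed API base URL

# 1. Spawn a proxy listener.
resp = requests.post(f"{API}/proxy", json={"port": 9090})
resp.raise_for_status()
instance_id = resp.json()["instanceId"]      # response field name assumed

# 2. The dev team runs their functional test script with the browser or
#    WebDriver pointed at webinspect.example.com:9090 ...

# 3. Save the captured traffic as a Workflow Macro artifact.
macro = requests.get(f"{API}/proxy/{instance_id}.webmacro")  # path assumed
macro.raise_for_status()
with open("functional-test.webmacro", "wb") as fh:
    fh.write(macro.content)

# 4. Kill the listener.
requests.delete(f"{API}/proxy/{instance_id}").raise_for_status()
```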
To support this with further automation ("developer-driven DAST"), you could have those same proxy API calls followed by a call to the New Scan endpoint at the end, to go ahead and trigger a Workflow-driven Scan using the Macro that was just recorded by the prior calls. This is good for putting in a CI/CD pipeline, provided you have a dedicated WebInspect machine sitting on the network with its API available.
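Continuing the sketch above, that extra step might look something like this; the /scanner/scans path and the settings-override fields are written from memory, so treat them as assumptions and confirm them against your version's API documentation:

```python
# Sketch: trigger a Workflow-driven Scan that replays the macro recorded by
# the proxy calls above. Field names and values are assumptions.
import requests

API = "http://webinspect.example.com:8083/webinspect"  # assumed API base URL

scan_order = {
    "settingsName": "Default",                        # assumed base settings
    "overrides": {
        "scanName": "pipeline-workflow-scan",
        "startOption": "workflowmacro",               # assumed field/value
        "workflowMacros": ["functional-test.webmacro"],
    },
}

resp = requests.post(f"{API}/scanner/scans", json=scan_order)
resp.raise_for_status()
print("Scan ID:", resp.json().get("ScanId"))          # response field assumed
```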
The challenge with using the WebInspect API as a "poor man's pipeline scanning tool" is that WebInspect is simplistic and has no resource management features in and of itself. This means your pipelines could trigger lots of scans quickly, and the WebInspect machine would fall over after 4+ scans got started. What a mess! You would have to design API checks into your pipelines to monitor the number of running scans on the WebInspect machine, and then pause and poll until the machine was free and the pipeline could submit its new scan order.
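One hedged way to express that guard in a pipeline step (again, the query parameters and response shape are assumptions):

```python
# Sketch: pause the pipeline until the WebInspect machine has capacity before
# submitting a new scan order. Endpoint and parameter names are assumptions.
import time
import requests

API = "http://webinspect.example.com:8083/webinspect"  # assumed API base URL
MAX_CONCURRENT = 1   # stay well below the point where the machine falls over

def running_scan_count() -> int:
    resp = requests.get(f"{API}/scanner/scans", params={"status": "running"})
    resp.raise_for_status()
    return len(resp.json())

while running_scan_count() >= MAX_CONCURRENT:
    print("WebInspect is busy; pipeline pausing...")
    time.sleep(60)

# The machine is free; submit the new scan order here.
```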
Our solution for this sort of enterprise automation would be to use Fortify ScanCentral DAST instead of WebInspect standalone. SCDAST provides a central web GUI for your appsec staff to configure, operate, and review scans, with multiple headless "WebInspect" scan machines managed in resource pools. Incoming scan orders (REST API calls) are queued and prioritized automatically, and the remote scan machines are brought online and shut down as needed (think of them as headless WebInspect API instances on Docker). So now your CI/CD pipelines can simply trigger the DAST scan and not worry about the scanner machine's resources.
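With SCDAST in place, the pipeline side can shrink to a single call against the ScanCentral DAST API referencing a CI/CD token defined on the scan settings in the web GUI. A hedged sketch follows; the path and payload may differ by release, so treat them as assumptions and confirm in the SCDAST API's Swagger page:

```python
# Sketch: queue a scan on ScanCentral DAST from a CI/CD pipeline using a
# CI/CD token configured on a scan-settings entry. URL, path, and payload
# fields are assumptions.
import requests

SCDAST = "https://scancentral-dast.example.com"       # assumed API host
CICD_TOKEN = "00000000-0000-0000-0000-000000000000"   # placeholder token

resp = requests.post(
    f"{SCDAST}/api/v2/scans/start-scan-cicd",         # path assumed
    json={"cicdToken": CICD_TOKEN},
)
resp.raise_for_status()
print("Queued scan order:", resp.json())
```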
This brings me around to another great option for your Selenium needs, the Fortify FAST Proxy. This solution only operates with ScanCentral DAST, which is why I had to go on that side tangent above. With FAST Proxy, your devs would only start up the FAST Proxy (which includes authentication details for the ScanCentral API), run their Selenium scripts through that proxy, and then shut the FAST Proxy down when done. That completes their functional testing with Selenium. Meanwhile, on shutdown, the FAST Proxy automatically delivers the captured traffic to ScanCentral as a new Workflow-driven Scan order. A little while later, the DAST scan of their Selenium script traffic completes, and if you configured Notifications, the devs receive a link to their appsec results.
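To make that concrete, here is a heavily hedged sketch of the dev-side sequence; the fast executable name and its flags are written from memory and should be checked against the FAST documentation, and the port, token, and URLs are placeholders:

```python
# Sketch of the FAST Proxy flow: start the proxy, run the Selenium suite
# through it, then stop it so it uploads the capture to ScanCentral DAST as
# a Workflow-driven Scan order. Command and flag names are assumptions.
import subprocess
from selenium import webdriver

FAST_PORT = "8087"                                        # assumed listener port
CICD_TOKEN = "<scancentral-cicd-token>"                   # placeholder
SCDAST_API = "https://scancentral-dast.example.com/api"   # placeholder

# 1. Start the FAST Proxy (flag names assumed; see the FAST docs).
subprocess.Popen(
    ["fast.exe", "-CIToken", CICD_TOKEN, "-u", SCDAST_API, "-p", FAST_PORT]
)

# 2. Run the Selenium scripts through the proxy.
options = webdriver.ChromeOptions()
options.add_argument(f"--proxy-server=http://127.0.0.1:{FAST_PORT}")
driver = webdriver.Chrome(options=options)
try:
    driver.get("https://qa.example.com/")                 # placeholder target
    # ... functional test steps ...
finally:
    driver.quit()

# 3. Stop the proxy; on shutdown it delivers the captured traffic to
#    ScanCentral DAST as a new scan order ("-s" stop flag assumed).
subprocess.run(["fast.exe", "-p", FAST_PORT, "-s"], check=True)
```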