I'm trying to build a quality assurance app in Swift using ARKit.
I'd like to position a USDZ model on top of its real world counterpart similar to this video...
I have been able to accomplish this with both image tracking and plane detection, but both methods suffer from "drift": the model tends to drift over time as the user moves around the object. I would like an approach that initially places the model using plane detection (since that seems to be the best way to keep the object anchored) and then uses image anchors to "reset" the model's location as they are detected. What I am having a hard time understanding is how the playing cards shown in the video can help, given that they are arbitrarily placed on the surface. I can see how they could help re-anchor in the Z direction, but can they really help realign X and Y when they do not have known coordinates themselves?
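One way an arbitrarily placed card could still correct X and Z, I think, is that the card does not need known coordinates; it only needs a pose that stays rigid relative to the object. At placement time you record the model's offset in the card's own frame, and whenever the card is re-detected you reapply that offset. Below is a sketch of just the math, reduced to 2D (x/z plus yaw) with Doubles; all type and function names are hypothetical, and in real ARKit code these values would come from the anchors' simd transforms.

```swift
import Foundation

// A minimal 2D pose on the ground plane: x/z position plus yaw (radians).
// Purely illustrative; in ARKit these values would be read from the
// anchors' transforms. All names here are hypothetical.
struct GroundPose {
    var x: Double
    var z: Double
    var yaw: Double
}

// Express the model's position in the card's local frame. This would be
// computed once, right after placing the model via plane detection.
func offsetInCardFrame(model: GroundPose, card: GroundPose) -> (dx: Double, dz: Double) {
    let wx = model.x - card.x
    let wz = model.z - card.z
    // Rotate the world-space offset by -yaw into the card's frame.
    let c = cos(-card.yaw), s = sin(-card.yaw)
    return (dx: c * wx - s * wz, dz: s * wx + c * wz)
}

// Re-project the saved offset from the card's freshly detected pose.
// Running this whenever the image anchor is (re)detected, and moving the
// model to the result, would cancel accumulated drift in X and Z.
func correctedModelPosition(card: GroundPose, offset: (dx: Double, dz: Double)) -> (x: Double, z: Double) {
    let c = cos(card.yaw), s = sin(card.yaw)
    return (x: card.x + c * offset.dx - s * offset.dz,
            z: card.z + s * offset.dx + c * offset.dz)
}
```

If the card's detected pose is unchanged, the round trip reproduces the model's original position exactly; if the tracked world has drifted, the re-detected card pose carries the model back with it.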
I'd appreciate any advice or suggestions, especially if I am overlooking a completely different approach that is better suited to this task.
UPDATE
I use a state based setup to wait for the user to...
- Select a Plane
- Select the first point on the plane
- Select the second point on the plane
The model is then added to the selected plane, and rotated to match the angle between point 1 and point 2.
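That rotation step reduces to a single atan2 of the direction between the two points on the plane. A minimal sketch of it in isolation (the helper name is mine; the sign convention mirrors the `eulerAngles.y = -rotationAngle` in the placement code further down):

```swift
import Foundation

// Yaw (rotation about the vertical axis) that aligns a model's local X axis
// with the direction from p1 to p2 on a horizontal plane. Mirrors the
// atan2-based rotation used in addAssemblyNode(); the helper name is mine.
func yawAligning(from p1: (x: Double, y: Double, z: Double),
                 to p2: (x: Double, y: Double, z: Double)) -> Double {
    let dx = p2.x - p1.x
    let dz = p2.z - p1.z
    // Negated so a positive result rotates counterclockwise when viewed
    // from above, matching eulerAngles.y = -atan2(dz, dx).
    return -atan2(dz, dx)
}
```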
The origin of the USDZ model is at its centerline, but my users typically want to align the corresponding surface of the model with the selected plane, so they have to manually move the model up/down, left/right, etc. using GUI controls to get it into its final position.
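Those nudges feel most intuitive in the model's own frame rather than along world axes. A sketch of converting a local nudge (right/up/forward) into a world-space translation, assuming the model is only rotated about the vertical axis as in my placement code (all names are mine):

```swift
import Foundation

// Convert a nudge expressed in the model's local frame (right, up, forward)
// into a world-space translation, assuming the model carries only a yaw
// rotation about the vertical axis. Names are mine, for illustration.
func worldDelta(right: Double, up: Double, forward: Double,
                modelYaw: Double) -> (x: Double, y: Double, z: Double) {
    let c = cos(modelYaw), s = sin(modelYaw)
    // Local +X (right) and +Z (forward) rotate with yaw; +Y (up) does not.
    return (x: c * right + s * forward,
            y: up,
            z: -s * right + c * forward)
}
```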
Here is the code I am currently using to initially place my model...
Select desired plane...
func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
    guard
        let updatedAnchor = anchor as? ARPlaneAnchor,
        let selectedAnchor = selectedPlaneAnchor,
        updatedAnchor.identifier == selectedAnchor.identifier
    else {
        return
    }
    // Keep the stored anchor in sync as ARKit refines its plane estimate
    self.selectedPlaneAnchor = updatedAnchor
}
Get first and second point to define rotation...
@objc func handleTap(_ gestureRecognizer: UITapGestureRecognizer) {
    // Get the location of the tap in the sceneView
    let tapLocation = gestureRecognizer.location(in: sceneView)

    // Perform a hit test to find the ARPlaneAnchor at the tap location
    let hitTestResults = sceneView.hitTest(tapLocation, types: .existingPlaneUsingExtent)
    if let result = hitTestResults.first,
       let detectedPlaneAnchor = result.anchor as? ARPlaneAnchor {
        // World-space position of the tap on the plane
        let p3 = result.worldTransform.columns.3
        switch currentState {
        case .initial, .loading, .ready:
            break
        case .surface:
            if selectedPlaneAnchor == nil {
                selectedPlaneAnchor = detectedPlaneAnchor
                currentState = .point1
            }
        case .point1:
            if detectedPlaneAnchor.identifier == selectedPlaneAnchor?.identifier {
                firstPoint = SCNVector3(p3.x, p3.y, p3.z)
                addPointNode(location: firstPoint!)
                currentState = .point2
            }
        case .point2:
            if detectedPlaneAnchor.identifier == selectedPlaneAnchor?.identifier {
                secondPoint = SCNVector3(p3.x, p3.y, p3.z)
                addPointNode(location: secondPoint!)
                addAssemblyNode()
                currentState = .ready
            }
        }
    }
}
Add model to plane and rotate accordingly...
func addAssemblyNode() {
    guard
        let usdzURL = Bundle.main.url(forResource: "art.scnassets/assembly", withExtension: "usdz"),
        let p1 = firstPoint,
        let p2 = secondPoint,
        // try? instead of try! so a missing or corrupt asset fails gracefully
        let assemblyScene = try? SCNScene(url: usdzURL, options: nil)
    else {
        return
    }
    // Yaw that aligns the model's X axis with the line from point 1 to point 2
    let directionVector = SCNVector3(p2.x - p1.x, p2.y - p1.y, p2.z - p1.z)
    let rotationAngle = atan2f(directionVector.z, directionVector.x)
    if let node = assemblyScene.rootNode.childNode(withName: "main", recursively: true) {
        node.name = "main"
        node.position = p1
        node.eulerAngles.y = -rotationAngle
        //node.scale = SCNVector3(x: 0.125, y: 0.125, z: 0.125)
        sceneView.scene.rootNode.addChildNode(node)
        assemblyNode = node
    }
}
Here is a video of the result showing the drift: the large blue plate on the end moves back into position when I return to my original location.