It is not equivalent.
When you set the Load Balancer to Internal, its network interfaces receive only private IP addresses, one per subnet/Availability Zone you place it in. Only clients whose private IP addresses can route to those Load Balancer addresses can reach it, which means clients must be in the same VPC or connected to it via VPN, Direct Connect, VPC peering, etc.
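As a concrete sketch, creating an internal Load Balancer with the AWS CLI looks like this (the name and subnet IDs are placeholders):

```shell
# Create an internal Application Load Balancer; its nodes get only
# private IP addresses from the given subnets (one per Availability Zone).
# "my-internal-lb" and the subnet IDs are placeholders.
aws elbv2 create-load-balancer \
    --name my-internal-lb \
    --scheme internal \
    --subnets subnet-0aaa1111bbbb22222 subnet-0ccc3333dddd44444
```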
When you set the Load Balancer to Public (internet-facing), it receives public IP addresses on its external interfaces and private IP addresses on its internal interfaces, again according to the Availability Zones and subnets you place it in. Incoming requests therefore reach the Load Balancer via public IP addresses, even from a client in the same Availability Zone as the Load Balancer. This also means both the Load Balancer and the client systems need access to the public internet; if the clients are located in AWS, they need a route to the internet, for example via an internet gateway or NAT gateway. Finally, if you want to filter access with a security group, you have to filter on the public IP addresses of your client systems.
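The internet-facing equivalent differs only in the scheme flag (internet-facing is also the default if you omit it). Again the name and subnet IDs are placeholders, and here the subnets must be public ones:

```shell
# Create an internet-facing Application Load Balancer; its nodes get
# public IPs and must live in public subnets with an internet gateway.
aws elbv2 create-load-balancer \
    --name my-public-lb \
    --scheme internet-facing \
    --subnets subnet-0aaa1111bbbb22222 subnet-0ccc3333dddd44444
```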
On the internal side of things (i.e. between the Load Balancer and the servers) nothing changes: the server/service sees connections coming only from the Load Balancer's private IP addresses.
So, more directly, the big difference is this: connections between a client and an internal Load Balancer are private and stay within your VPC, whereas connections between a client and a public Load Balancer use public IP addresses and traverse the public internet, even if that traffic never physically leaves an AWS datacenter. This directly affects your security groups, which must reference either private or public IP addresses accordingly.
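To illustrate that security-group difference, here is a sketch with the AWS CLI (the group ID and CIDRs are placeholders): for an internal Load Balancer you allow your VPC's private range, while for a public one you have to allow the clients' public addresses:

```shell
# Internal LB: allow clients by their private (VPC) address range.
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 443 \
    --cidr 10.0.0.0/16          # placeholder VPC CIDR

# Public LB: you must allow the clients' public IP addresses instead.
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 443 \
    --cidr 203.0.113.10/32      # placeholder client public IP
```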