Joins are evil, particularly on Hadoop, where we can't guarantee data co-locality, especially when we need to join two large tables. This is one of the differences between Hadoop and a traditional MPP such as Teradata, Greenplum, etc. In an MPP I evenly distribute my data across all nodes in the cluster based on a hashed key. The relevant rows for the order and order_item tables would end up on the same nodes, which would at least eliminate data transfer across the network. In Hadoop you would instead nest the order_item data inside the order table, which eliminates the need for the join altogether.
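As a minimal sketch of what that nesting could look like in PySpark (the table and column names `orders`, `order_items`, `order_id`, `product_id`, etc. are illustrative assumptions, not from a real schema):

```python
# Sketch: denormalize order_items into a nested array column on orders.
# One join at ETL/write time, zero joins at query time.
from pyspark.sql import SparkSession
from pyspark.sql.functions import collect_list, struct

spark = SparkSession.builder.appName("nest-order-items").getOrCreate()

orders = spark.table("orders")            # one row per order
order_items = spark.table("order_items")  # one row per line item

# Roll each order's line items up into an array of structs.
items_nested = (
    order_items
    .groupBy("order_id")
    .agg(collect_list(struct("product_id", "quantity", "price")).alias("items"))
)

# Attach the nested items to the order row.
orders_nested = orders.join(items_nested, "order_id", "left")
```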
If, on the other hand, you have a small lookup/dimension table and a large fact table, you can broadcast the small table to all nodes in your cluster, which avoids shuffling the large fact table across the network.
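A quick sketch of a broadcast (map-side) join in PySpark, assuming hypothetical `fact_sales` and `dim_product` tables joined on `product_id`:

```python
# Sketch: broadcast join. The small dimension table is shipped to every
# executor, so the large fact table never moves across the network.
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("broadcast-join").getOrCreate()

fact_sales = spark.table("fact_sales")    # large fact table
dim_product = spark.table("dim_product")  # small dimension table

result = fact_sales.join(broadcast(dim_product), "product_id")
```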
In summary, star schemas are still relevant, but mostly from a logical modelling point of view. Physically you may be better off denormalizing even further into one big fact table stored in a compressed, columnar format with nested structures.
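Continuing the first sketch above, persisting that denormalized, nested table as snappy-compressed Parquet (a columnar format) could look like this; the output path is an illustrative assumption:

```python
# Sketch: write the nested fact table (orders_nested from the earlier
# sketch) as snappy-compressed, columnar Parquet.
(
    orders_nested.write
    .mode("overwrite")
    .option("compression", "snappy")
    .parquet("/warehouse/orders_nested")
)
```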
I have written up a full blog post that discusses the purpose and usefulness of dimensional models on Hadoop and Big Data technologies.