Flink does not support direct connections to Hive the way Spark does through its SQL context, but there is a simple way to analyze data in a Hive table from Flink using the Flink Table API.
First, find the exact HDFS location of the Hive table you want to analyze with Flink, e.g.
hdfs://app/hive/warehouse/mydb/mytable
Then read the data into a DataSet (note that the type parameter must match the POJO class passed to pojoType()):
DataSet<MyClass> csvInput = env
.readCsvFile("hdfs://app/hive/warehouse/mydb/mytable/data.csv")
.pojoType(MyClass.class, "col1", "col2", "col3");
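For pojoType() to work, MyClass must satisfy Flink's POJO rules: a public class with a public no-argument constructor and public fields (or getters/setters) matching the column names. A minimal sketch — the field names come from the snippet above, but the types are assumptions; adjust them to your table's actual schema:

```java
// Hypothetical POJO for the three CSV columns. Field types here are
// placeholders; use the types of your actual Hive table columns.
public class MyClass {
    public String col1;  // fields must be public (or have getters/setters)
    public String col2;
    public int col3;

    // Flink's POJO rules require a public no-argument constructor
    public MyClass() {}
}
```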
Next, create a Table from the DataSet and register it with the TableEnvironment:
Table mytable = tableEnv.fromDataSet(csvInput);
tableEnv.registerTable("mytable", mytable);
And now you are all set to query this table using Table API syntax.
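For example, a simple filter and projection could look like this. This is a sketch against the legacy (DataSet-based) batch Table API; the column names are carried over from the snippet above, and the predicate value is an arbitrary assumption:

```java
// Scan the registered table and query it with Table API expressions
Table result = tableEnv.scan("mytable")
        .filter("col3 > 100")    // keep rows where col3 exceeds 100
        .select("col1, col2");   // project two columns

// Convert the result back to a DataSet of Rows to print or process further
DataSet<Row> rows = tableEnv.toDataSet(result, Row.class);
rows.print();
```

Running this requires a Flink execution environment and the flink-table dependency on the classpath.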
Here is a link to the sample code.
Hope this helps.