I have an API whose response looks like this:
{
  conditions: {…},
  individuals: [{employee_id: 12300, employee_type: "Permanent", person_id: "1211211", full_name: "John Jacobs", …}, …],
  page_num: 5,
  rows: 10,
  total_count: 213
}
Each entry inside the individuals array looks like this:
[ { employee_id: 12300,
    employee_type: 'Permanent',
    person_id: '1211211',
    full_name: 'John Jacobs',
    email_id: 'john_jacobs@gmail.com',
    person_role: [
      { rg_code: 'AP', cl_code: '12', data: 'South East Asia', loc_code: 'IN' },
      { rg_code: 'CD', cl_code: '15', data: 'Middle East Asia', loc_code: 'QY' },
      { rg_code: 'AP', cl_code: '12', data: 'South East Asia', loc_code: 'IN' },
      { rg_code: 'DF', cl_code: '34', data: 'South East Europe', loc_code: 'FR' }
    ],
    staffings: [
      { id: 1244, ind_id: 113322, p_id: 112, p_name: 'Bollywood' },
      { id: 1245, ind_id: 112322, p_id: 113, p_name: 'Tollywood' }
    ],
    first_name: 'John',
    last_name: 'Jacobs',
    location: { country: 'India', region: 'South Asia', code: 'SA/IN', name: 'Bangalore' },
    assistants: [ {} ],
    job_title: 'SSE-2',
    person_full_name: 'John Jacobs' } ]
I'm trying to find all entries inside the individuals array that have duplicate loc_code values inside person_role. For example, in the entry shown above there are two entries with loc_code = 'IN'. Is a solution possible without for loops, using only the filter and reduce methods?
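
Here is a minimal sketch along those lines, assuming the parsed response object is available in a variable named response (a hypothetical name) and that person_role is always an array:

const duplicates = response.individuals.filter((individual) => {
  // Track loc_codes already seen for this individual.
  const seen = {};
  // reduce folds person_role into a single boolean: does any loc_code repeat?
  return individual.person_role.reduce((hasDuplicate, role) => {
    const isRepeat = Boolean(seen[role.loc_code]);
    seen[role.loc_code] = true;
    return hasDuplicate || isRepeat;
  }, false);
});

With the sample entry above, the inner reduce returns true because 'IN' appears twice, so that individual is kept by the outer filter.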