
After several days of head-scratching because Cygnus was only persisting updates intermittently, I found in the logs that the generated namespace name is too long.

I'm working on CentOS 7 and my entities use the standard type BikeHireDockingStation. The error says that the generated namespace is too long (127-byte maximum); the one generated is 167 bytes:

sth_malaga.sth_/_urn:ngsi-ld:BikeHireDockingStation:10_BikeHireDockingStation.aggr.$_id.entityId_1__id.entityType_1__id.attrName_1__id.resolution_1__id.origin_1

But even if I change the type to bike, the size is still 124 bytes.
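For reference, the byte count of a generated namespace can be checked with a standard shell one-liner (the single quotes keep the shell from expanding the $ inside the name):

    $ echo -n 'sth_malaga.sth_/_urn:ngsi-ld:BikeHireDockingStation:10_BikeHireDockingStation.aggr.$_id.entityId_1__id.entityType_1__id.attrName_1__id.resolution_1__id.origin_1' | wc -c

Substituting a shorter entity type into that string shows how quickly the count drops towards the 127-byte limit.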

Here is the relevant part of the error log, obtained by running: $ docker container logs fiware-cygnus

time=2019-09-09T21:14:14.176Z | lvl=ERROR | corr=N/A | trans=N/A | srv=N/A | 
subsrv=N/A | comp=cygnus-ngsi | op=processRollbackedBatches | 
msg=com.telefonica.iot.cygnus.sinks.NGSISink[399] : CygnusPersistenceError. -, 
Command failed with error 67: 'namespace name generated from index name 
"sth_malaga.sth_/_urn:ngsi-ld:BikeHireDockingStation:10_BikeHireDockingStation.aggr.$_id.entityId_1__id.entityType_1__id.attrName_1__id.resolution_1__id.origin_1"
 is too long (127 byte max)' on server mongo-db:27017. 
The full response is { "ok" : 0.0, "errmsg" : "namespace name generated from index name 
\"sth_malaga.sth_/_urn:ngsi-ld:BikeHireDockingStation:10_BikeHireDockingStation.aggr.$_id.entityId_1__id.entityType_1__id.attrName_1__id.resolution_1__id.origin_1\" 
is too long (127 byte max)", "code" : 67, "codeName" : "CannotCreateIndex" }.
 Stack trace: 
[com.telefonica.iot.cygnus.sinks.NGSISTHSink$STHAggregator.persist(NGSISTHSink.java:374), 
com.telefonica.iot.cygnus.sinks.NGSISTHSink.persistBatch(NGSISTHSink.java:108), 
com.telefonica.iot.cygnus.sinks.NGSISink.processRollbackedBatches(NGSISink.java:391), 
com.telefonica.iot.cygnus.sinks.NGSISink.process(NGSISink.java:373), 
org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:67), 
org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:145), java.lang.Thread.run(Thread.java:748)]

Does this mean the maximum workable size for a type name is 5 characters? (With 5, the namespace size is 126 bytes.)

Can you help me solve this problem?

I have tried different scenarios:

fiware/orion:latest fiware/cygnus-common:latest mongo:3.6

This one gives the following result:

time=2019-09-12T17:12:17.071Z | lvl=WARN | corr=N/A | trans=N/A | srv=N/A | subsrv=N/A | comp=cygnus-common | op=doPost | msg=org.apache.flume.source.http.HTTPSource$FlumeHTTPServlet[186] : Received bad request from client. 
org.apache.flume.source.http.HTTPBadRequestException: Request has invalid JSON Syntax.
    at org.apache.flume.source.http.JSONHandler.getEvents(JSONHandler.java:119)
    at org.apache.flume.source.http.HTTPSource$FlumeHTTPServlet.doPost(HTTPSource.java:184)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:725)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:814)
    at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
    at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:401)
    at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
    at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
    at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
    at org.mortbay.jetty.Server.handle(Server.java:326)
    at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
    at org.mortbay.jetty.HttpConnection$RequestHandler.content(HttpConnection.java:945)
    at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:756)
    at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:218)
    at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
    at org.mortbay.jetty.bio.SocketConnector$Connection.run(SocketConnector.java:228)
    at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
Caused by: com.google.gson.JsonSyntaxException: java.lang.IllegalStateException: Expected BEGIN_ARRAY but was BEGIN_OBJECT at line 1 column 2
    at com.google.gson.Gson.fromJson(Gson.java:806)
    at com.google.gson.Gson.fromJson(Gson.java:761)
    at org.apache.flume.source.http.JSONHandler.getEvents(JSONHandler.java:117)
    ... 16 more
Caused by: java.lang.IllegalStateException: Expected BEGIN_ARRAY but was BEGIN_OBJECT at line 1 column 2
    at com.google.gson.stream.JsonReader.expect(JsonReader.java:339)
    at com.google.gson.stream.JsonReader.beginArray(JsonReader.java:306)
    at com.google.gson.internal.bind.CollectionTypeAdapterFactory$Adapter.read(CollectionTypeAdapterFactory.java:79)
    at com.google.gson.internal.bind.CollectionTypeAdapterFactory$Adapter.read(CollectionTypeAdapterFactory.java:60)
    at com.google.gson.Gson.fromJson(Gson.java:795)
    ... 18 more
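I guess this happens because cygnus-common falls back to the stock Flume HTTPSource JSONHandler, which (judging from the "Expected BEGIN_ARRAY but was BEGIN_OBJECT" message) expects a JSON array of Flume events, whereas Orion sends a single NGSI notification object. For illustration only, a request of the shape that handler parses looks something like this (hypothetical header and body, assuming the 5050 service port used below):

    $ curl -X POST 'http://localhost:5050/' \
        -H 'Content-Type: application/json' \
        -d '[{"headers": {"example-header": "value"}, "body": "example event body"}]'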

With the configuration fiware/orion:latest fiware/cygnus-ngsi:1.13.0 mongo:3.6, the result is:

time=2019-09-12T17:22:15.466Z | lvl=ERROR | corr=N/A | trans=N/A | srv=N/A | subsrv=N/A | comp=cygnus-ngsi | op=processRollbackedBatches | msg=com.telefonica.iot.cygnus.sinks.NGSISink[399] : CygnusPersistenceError. -, Command failed with error 67: 'namespace name generated from index name "sth_malaga.sth_/_EstacionBici:10_BikeHireDockingStation.aggr.$_id.entityId_1__id.entityType_1__id.attrName_1__id.resolution_1__id.origin_1" is too long (127 byte max)' on server mongo-db:27017. The full response is { "ok" : 0.0, "errmsg" : "namespace name generated from index name \"sth_malaga.sth_/_EstacionBici:10_BikeHireDockingStation.aggr.$_id.entityId_1__id.entityType_1__id.attrName_1__id.resolution_1__id.origin_1\" is too long (127 byte max)", "code" : 67, "codeName" : "CannotCreateIndex" }. Stack trace: [com.telefonica.iot.cygnus.sinks.NGSISTHSink$STHAggregator.persist(NGSISTHSink.java:374), com.telefonica.iot.cygnus.sinks.NGSISTHSink.persistBatch(NGSISTHSink.java:108), com.telefonica.iot.cygnus.sinks.NGSISink.processRollbackedBatches(NGSISink.java:391), com.telefonica.iot.cygnus.sinks.NGSISink.process(NGSISink.java:373), org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:67), org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:145), java.lang.Thread.run(Thread.java:748)]

And finally, with the configuration fiware/orion:latest fiware/cygnus-ngsi:latest mongo:3.6, the result is:

time=2019-09-12T17:25:48.943Z | lvl=DEBUG | corr=N/A | trans=N/A | srv=N/A | subsrv=N/A | comp=cygnus-ngsi | op=processNewBatches | msg=com.telefonica.iot.cygnus.sinks.NGSISink[492] : Batch accumulation time reached, the batch will be processed as it is
time=2019-09-12T17:25:49.007Z | lvl=DEBUG | corr=N/A | trans=N/A | srv=N/A | subsrv=N/A | comp=cygnus-ngsi | op=run | msg=com.telefonica.iot.cygnus.interceptors.NGSINameMappingsInterceptor$PeriodicalNameMappingsReader[205] : [nmi] The configuration has not changed

But it doesn't create the sth_malaga database, as I can see by inspecting Mongo like this: $ docker exec -it db-mongo bash

> show dbs
admin         0.000GB
config        0.000GB
local         0.000GB
orion         0.000GB
orion-malaga  0.000GB
> 

As you can see, I'm going slightly crazy. Can you suggest the best Cygnus, Orion and Mongo versions to use?

version: "3.5"
services:
  # Orion is the context broker
  orion:
    image: fiware/orion:latest
    hostname: orion
    container_name: fiware-orion
    depends_on:
      - mongo-db
    networks:
      - default
    expose:
      - "1026"
    ports:
      - "1026:1026"
    command: -dbhost mongo-db -logLevel DEBUG
    healthcheck:
      test: curl --fail -s http://orion:1026/version || exit 1

  # Configure Cygnus to store the updates that STH-Comet will later query
  cygnus:
    image: fiware/cygnus-ngsi:latest
    hostname: cygnus
    container_name: fiware-cygnus
    depends_on:
      - mongo-db-cygnus
    networks:
      - default
    expose:
      - "5050"
      - "5080"
    ports:
      - "5050:5050"
      - "5080:5080"
    environment:
      - "CYGNUS_MONGO_HOSTS=mongo-db-cygnus:27017" # servidor donde se hará la persistencia de datos
      - "CYGNUS_LOG_LEVEL=DEBUG" # Nivel de log para Cygnus
      - "CYGNUS_SERVICE_PORT=5050" # Puerto de Cynus en el que escucha las actualizaciones
      - "CYGNUS_API_PORT=5080" # Puerto de Cygnus para operacion
    healthcheck:
      test: curl --fail -s http://localhost:5080/v1/version || exit 1

  # STH-Comet consumes the data stored in MongoDB for the short-term history
  sth-comet:
    image: fiware/sth-comet:latest
    hostname: sth-comet
    container_name: fiware-sth-comet
    depends_on:
      - cygnus
      - mongo-db-cygnus
    networks:
      - default
    ports:
      - "8666:8666"
    environment:
      - STH_HOST=0.0.0.0
      - STH_PORT=8666
      - DB_PREFIX=sth_
      - DB_URI=mongo-db-cygnus:27017
      - LOGOPS_LEVEL=DEBUG
    healthcheck:
      test: curl --fail -s http://localhost:8666/version || exit 1

  # Database for Orion
  mongo-db:
    image: mongo:3.6
    hostname: mongo-db
    container_name: db-mongo
    expose:
      - "27017"
    ports:
      - "27017:27017"
    networks:
      - default
    command: --bind_ip_all --smallfiles
    volumes:
      - mongo-db:/data

  # Database for Cygnus
  mongo-db-cygnus:
    image: mongo:latest
    hostname: mongo-db-cygnus
    container_name: db-mongo-cygnus
    expose:
      - "27017"
    ports:
      - "27018:27017"
    networks:
      - default
    command: --bind_ip_all 
    volumes:
      - mongo-db-cygnus:/data

networks:
  default:
    ipam:
      config:
        - subnet: 172.18.1.0/24

volumes:
  mongo-db: ~
  mongo-db-cygnus: ~

I have tried this, but in this case Cygnus doesn't write anything to the second database. It only works as before (updating only some entities) if I change the Cygnus version to image: fiware/cygnus-ngsi:1.14.0. So using the two database versions gives no improvement.
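A quick way to check whether Cygnus wrote anything to the second instance (container name as in the compose file above; the mongo shell is assumed to be available in the image) is:

    $ docker exec -it db-mongo-cygnus mongo --eval "db.adminCommand('listDatabases')"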

1 Answer


The root cause of this problem is outside Cygnus itself: it is the index name length limit that MongoDB enforces prior to version 4.2.

Depending on the Cygnus version, the problem is handled in different ways:

  • Versions prior to 1.14.0 throw an exception that interrupts the data persistence operation and prints an ugly Java stack trace in the logs. I understand this is your case.
  • Version 1.14.0 and beyond handle the error situation correctly: the index is not created (a WARN trace about it is printed in the logs) but the data is persisted. So in this case Cygnus does its work, although you may experience slower queries when accessing the data if there is a large amount of it. (You can confirm which version is actually running as shown just below.)
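To confirm which Cygnus version is actually running, you can query its management API on the API port (5080 in the compose file of the question), the same endpoint the compose healthcheck already uses:

    $ curl -s http://localhost:5080/v1/version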

The best solution is to upgrade MongoDB to 4.2, which should remove the problem completely. But in that case you should take two things into account:

  • MongoDB 4.2 is not yet officially supported by Cygnus, although user reports are positive.
  • I don't know whether Orion Context Broker works with MongoDB 4.2. I'm not aware of any positive or negative reports, so my suggestion is that you test it :). In the worst case, you could use two separate MongoDB instances (4.2 for Cygnus and 3.6 for Orion); a quick way to check what each instance is running is sketched below.
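Once you have switched images, a quick sanity check of the server version each instance is actually running (assuming the container names db-mongo and db-mongo-cygnus from the question's compose file):

    $ docker exec -it db-mongo mongo --eval "db.version()"
    $ docker exec -it db-mongo-cygnus mongo --eval "db.version()"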
fgalan
  • Thank you, your comments are very useful. I will try Mongo 4.2 – David Bueno Vallejo Sep 17 '19 at 08:34
  • Tested with the configuration fiware-orion:latest, fiware-cygnus:latest and mongo-db:latest, and Orion doesn't start correctly ;-( – David Bueno Vallejo Sep 17 '19 at 16:45
  • I'd suggest dealing with the Orion issue separately. In that sense, could you open an issue about it in the Orion repository at https://github.com/telefonicaid/fiware-orion/issues/new, including the error that Orion prints and all other relevant information, please? In the meantime, you can apply the solution I mentioned in my answer: "In the worst case, you could use two separate MongoDB instances (4.2 for Cygnus and 3.6 for Orion)". Not ideal, but in a Docker-based setup it is relatively easy to run two MongoDB instances ;) – fgalan Sep 17 '19 at 21:27
  • I tried your suggestion with the last docker-compose.yml that I put at the end of the question, but the mongo-db-cygnus database is not updated by Cygnus :-( – David Bueno Vallejo Sep 19 '19 at 16:58