My site - Latest question feed
https://ask.fiware.org/questions/
Open source question and answer forum written in Python and Django
Copyright Askbot, 2010-2011. Thu, 10 Jan 2019 07:16:59 +0100

Fiware deployment in kubernetes
https://ask.fiware.org/question/1057/fiware-deployment-in-kubernetes/
Dear FIWARE support team,
I want to deploy FIWARE Orion, IDAS and Cygnus in Kubernetes in order to scale the components.
ashok kurukundi | Thu, 10 Jan 2019 07:16:59 +0100 | https://ask.fiware.org/question/1057/

Benefits from Fiware membership
https://ask.fiware.org/question/1028/benefits-from-fiware-membership/
Good afternoon. Please explain to me the difference between the membership levels, in terms of price, benefits and number of users.
Sabana | Wed, 12 Sep 2018 11:22:22 +0200 | https://ask.fiware.org/question/1028/

MongoDB-Cygnus Persistence
https://ask.fiware.org/question/1051/mongodb-cygnus-persistence/
I am using a sensor device (LWM2M) to receive context data for historic persistence; all services run using docker-compose. I added Cygnus and set up all the docker-compose definitions. Checking MongoDB, the device table was not created; these are the only databases I have:
$mongo
> show dbs
admin 0.000GB
local 0.000GB
lwm2miotagent 0.000GB
orion-smartgondor 0.000GB
sth_smartgondor 0.000GB
> use sth_smartgondor
switched to db sth_smartgondor
> show collections
sth_/gardens_raspiSensorTV:Device_Device.aggr
I expect to also see **sth_/gardens_raspiSensorTV:Device_Device** as the device table, but it was not created.
I am not sure whether the IDAS agent is actually receiving data from the device, or whether the received data is simply not being forwarded to Orion. Can someone please help me achieve my goal? The IDAS agent and Orion logs are below:
time=2018-12-07T11:56:58.136Z | lvl=DEBUG | corr=n/a | trans=n/a | op=LWM2MLib.COAPRouter | msg=Handling request with method [POST] on url [/rd/1] with messageId [42655]
time=2018-12-07T11:56:58.137Z | lvl=DEBUG | corr=n/a | trans=n/a | op=LWM2MLib.UpdateRegistration | msg=Handling update registration request
time=2018-12-07T11:56:58.137Z | lvl=DEBUG | corr=n/a | trans=n/a | op=LWM2MLib.COAPUtils | msg=Extracting query parameters from request
time=2018-12-07T11:56:58.138Z | lvl=DEBUG | corr=n/a | trans=n/a | op=LWM2MLib.UpdateRegistration | msg=Updating device register with lifetime [undefined] and address [193.136.33.222].
{"op":"IOTAgent.LWM2MHandlers","time":"2018-12-07T11:56:58.138Z","lvl":"DEBUG","msg":"Handling update registration of the device"}
time=2018-12-07T11:56:58.140Z | lvl=DEBUG | corr=715bff5c-5c01-4295-9881-c32cdd193cad | trans=715bff5c-5c01-4295-9881-c32cdd193cad | op=IoTAgentNGSI.MongoDBGroupRegister | srv=n/a | subsrv=n/a | msg=Looking for group params ["resource","apikey"] with queryObj {} | comp=IoTAgent
time=2018-12-07T11:56:58.145Z | lvl=DEBUG | corr=715bff5c-5c01-4295-9881-c32cdd193cad | trans=715bff5c-5c01-4295-9881-c32cdd193cad | op=IoTAgentNGSI.MongoDBGroupRegister | srv=n/a | subsrv=n/a | msg=Device group for fields [["resource","apikey"]] not found: [{}] | comp=IoTAgent
time=2018-12-07T11:56:58.146Z | lvl=ERROR | corr=715bff5c-5c01-4295-9881-c32cdd193cad | trans=715bff5c-5c01-4295-9881-c32cdd193cad | op=IoTAgentNGSI.Alarms | srv=n/a | subsrv=n/a | msg=Raising [MONGO-ALARM]: {"name":"DEVICE_GROUP_NOT_FOUND","message":"Couldn\t find device group","code":404} | comp=IoTAgent
time=2018-12-07T11:56:58.147Z | lvl=DEBUG | corr=715bff5c-5c01-4295-9881-c32cdd193cad | trans=715bff5c-5c01-4295-9881-c32cdd193cad | op=IoTAgentNGSI.MongoDBDeviceRegister | srv=n/a | subsrv=n/a | msg=Looking for device with id [raspiSensorTV]. | comp=IoTAgent
time=2018-12-07T11:56:58.152Z | lvl=ERROR | corr=715bff5c-5c01-4295-9881-c32cdd193cad | trans=715bff5c-5c01-4295-9881-c32cdd193cad | op=IoTAgentNGSI.Alarms | srv=n/a | subsrv=n/a | msg=Releasing [MONGO-ALARM] | comp=IoTAgent
{"op":"IOTAgent.LWM2MHandlers","time":"2018-12-07T11:56:58.153Z","lvl":"DEBUG","msg":"Preregistered device found."}
time=2018-12-07T11:56:58.153Z | lvl=DEBUG | corr=n/a | trans=n/a | op=LWM2MLib.UpdateRegistration | msg=Update registration request ended successfully
{"time":"2018-12-07T11:56:58.204Z","lvl":"DEBUG","msg":"Observers created successfully."}
It looks like measures are being sent (see above), but nothing reaches MongoDB. I also don't understand the `{"name":"DEVICE_GROUP_NOT_FOUND","message":"Couldn't find device group","code":404}` error. I would appreciate a step-by-step process, please.
arilwan | Mon, 10 Dec 2018 20:59:33 +0100 | https://ask.fiware.org/question/1051/

COSMOS setup requirements
https://ask.fiware.org/question/1050/cosmos-setup-requirements/
I have installed an HDFS cluster with Hive on top of it, but I am not clear about what I have to install next (Tidoop, cosmos-gui, etc.) to complete the Cosmos FIWARE setup.
Can anyone guide me on this?
damini | Thu, 06 Dec 2018 07:36:54 +0100 | https://ask.fiware.org/question/1050/

Fiware slack - community
https://ask.fiware.org/question/1047/fiware-slack-community/
Hi, I'm diving into FIWARE and would like to know where the community hangs out.
Is this the place to exchange thoughts (i.e. technical discussions, architecture discussions, business initiatives, etc.)?
Is there a Slack or Discord channel? (These seem to be the trend replacing the old forums.)
Also, is this site connected to Stack Overflow? I see there are FIWARE questions on both sites; perhaps it would be good to unify them.
Thanks in advance.
Rafael Sisto | Wed, 28 Nov 2018 12:45:03 +0100 | https://ask.fiware.org/question/1047/

Get token for object storage
https://ask.fiware.org/question/1045/get-token-for-object-storage/
Dear all,
I cannot use the POST call to http://cloud.lab.fiware.org:4730/v2.0/tokens with payload:
{
  "auth": {
    "passwordCredentials": {
      "username": "my-email@email.com",
      "password": "yourpassword"
    },
    "tenantId": "a121dfc9d22347ebb07eb89cc3c0e79f"
  }
}
to get the token for Object Storage.
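For reference, here is a minimal sketch of that request using curl. The endpoint and tenantId are the ones from the payload above; the credentials are of course placeholders, and the curl invocation is left commented out since it needs the Keystone endpoint to be reachable:

```shell
# Keystone v2.0 token request body (credentials and tenantId are the
# placeholder values from the question, not working values).
cat > /tmp/token_req.json <<'EOF'
{
  "auth": {
    "passwordCredentials": {
      "username": "my-email@email.com",
      "password": "yourpassword"
    },
    "tenantId": "a121dfc9d22347ebb07eb89cc3c0e79f"
  }
}
EOF
# With the endpoint reachable, the token would be requested like this:
# curl -X POST http://cloud.lab.fiware.org:4730/v2.0/tokens \
#      -H 'Content-Type: application/json' -H 'Accept: application/json' \
#      -d @/tmp/token_req.json
echo "payload written to /tmp/token_req.json"
```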
Please, could you tell me how I can solve this?
Thanks a lot
Regards
Pasquale
pasquy73 | Sat, 03 Nov 2018 09:23:59 +0100 | https://ask.fiware.org/question/1045/

IDAS scalability
https://ask.fiware.org/question/1036/idas-scalability/
I have simulated a million devices, and I want to create and update context using them. IDAS crashes when it receives data from a million devices.
ashok kurukundi | Mon, 08 Oct 2018 19:11:18 +0200 | https://ask.fiware.org/question/1036/

Can multiple fiware orion work with same mongodb backend?
https://ask.fiware.org/question/1040/can-multiple-fiware-orion-work-with-same-mongodb-backend/
I have tried connecting multiple Orion instances from different machines to the same MongoDB backend.
The issue I have observed is that only one of the Orion instances is able to process an onchange subscription registered against an attribute of an entity.
My question is: is it a valid practice to have multiple active instances of Orion, each independently handling incoming requests, connected to the same MongoDB backend? Does such a deployment configuration work in the same manner as a single instance of Orion connected to the MongoDB backend?
I have tried this with Orion version 0.23 and MongoDB version 3.4.
Fiware user | Sat, 20 Oct 2018 04:56:38 +0200 | https://ask.fiware.org/question/1040/

Multiple orion with single mongodb instance fails to process subscriptions
https://ask.fiware.org/question/1039/multiple-orion-with-single-mongodb-instance-fails-to-process-subscriptions/
I am trying to connect multiple Orion instances to the same MongoDB backend. While I am able to receive context data from southbound IoT devices into Orion, only one of the Orion instances processes the onchange subscription registered on the context data of an entity attribute.
Is it a valid practice to have multiple Orion instances connect to the same MongoDB backend? If so, and two different Orion instances receive context attribute data for the same entity, will that trigger a pre-registered onchange subscription?
Do I need to configure something in Orion or MongoDB for this to work?
I am using Orion 0.23 and MongoDB 3.4.
Fiware user | Thu, 18 Oct 2018 11:43:14 +0200 | https://ask.fiware.org/question/1039/

Error at Cygnus when it receives a notification from Orion Context Broker: 'fiware-servicepath' header value does not match the number of notified context responses
https://ask.fiware.org/question/825/error-at-cygnus-when-receives-a-notification-from-orion-context-broker-fiware-servicepath-header-value-does-not-match-the-number-of-notified-context/
I have created a subscription at Orion Context Broker in order to send the data to Cygnus for persistence in MongoDB. When Orion receives a new event value it sends the notification to Cygnus, and I get the following log output on the console:
time=2017-08-31T10:10:56.130Z | lvl=INFO | corr=ad9caef6-8e34-11e7-885a-fa163e0d608a | trans=c08becae-0e14-4859-940c-32558dfec7f3 | srv=default | subsrv=/cygnusservicepath | comp=cygnusagent | op=getEvents | msg=com.telefonica.iot.cygnus.handlers.NGSIRestHandler[286] : [NGSIRestHandler] Starting internal transaction (c08becae-0e14-4859-940c-32558dfec7f3)
time=2017-08-31T10:10:56.141Z | lvl=INFO | corr=ad9caef6-8e34-11e7-885a-fa163e0d608a | trans=c08becae-0e14-4859-940c-32558dfec7f3 | srv=default | subsrv=/cygnusservicepath | comp=cygnusagent | op=getEvents | msg=com.telefonica.iot.cygnus.handlers.NGSIRestHandler[304] : [NGSIRestHandler] Received data ({"subscriptionId":"xxxxxxxxx","data":[{"id":"id1","type":"type1,....)
time=2017-08-31T10:10:56.239Z | lvl=WARN | corr=ad9caef6-8e34-11e7-885a-fa163e0d608a | trans=c08becae-0e14-4859-940c-32558dfec7f3 | srv=default | subsrv=/cygnusservicepath | comp=cygnusagent | op=getEvents | msg=com.telefonica.iot.cygnus.handlers.NGSIRestHandler[324] : [NGSIRestHandler] Bad HTTP notification ('fiware-servicepath' header value does not match the number of notified context responses
time=2017-08-31T10:10:56.240Z | lvl=WARN | corr=ad9caef6-8e34-11e7-885a-fa163e0d608a | trans=c08becae-0e14-4859-940c-32558dfec7f3 | srv=default | subsrv=/cygnusservicepath | comp=cygnusagent | op=doPost | msg=org.apache.flume.source.http.HTTPSource$FlumeHTTPServlet[186] : Received bad request from client.
org.apache.flume.source.http.HTTPBadRequestException: 'fiware-servicepath' header value does not match the number of notified context responses
at com.telefonica.iot.cygnus.handlers.NGSIRestHandler.getEvents(NGSIRestHandler.java:327)
at org.apache.flume.source.http.HTTPSource$FlumeHTTPServlet.doPost(HTTPSource.java:184)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:725)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:814)
at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:401)
at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
at org.mortbay.jetty.Server.handle(Server.java:326)
at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
at org.mortbay.jetty.HttpConnection$RequestHandler.content(HttpConnection.java:945)
at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:756)
at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:218)
at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
at org.mortbay.jetty.bio.SocketConnector$Connection.run(SocketConnector.java:228)
at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
The header is sent from Orion Context Broker to Cygnus, so what can I do to solve the problem?
Maybe something is not properly configured?
cat /usr/cygnus/conf/agent_mongo.conf
cygnusagent.sources = http-source
cygnusagent.sinks = mongo-sink
cygnusagent.channels = mongo-channel
cygnusagent.sources.http-source.channels = mongo-channel
cygnusagent.sources.http-source.type = org.apache.flume.source.http.HTTPSource
cygnusagent.sources.http-source.port = 5050
cygnusagent.sources.http-source.handler = com.telefonica.iot.cygnus.handlers.NGSIRestHandler
cygnusagent.sources.http-source.handler.notification_target = /notify
cygnusagent.sources.http-source.handler.default_service = default
cygnusagent.sources.http-source.handler.default_service_path = /cygnusservicepath
cygnusagent.sources.http-source.handler.events_ttl = 10
cygnusagent.sources.http-source.interceptors = ts gi
cygnusagent.sources.http-source.interceptors.ts.type = timestamp
cygnusagent.sources.http-source.interceptors.gi.type = com.telefonica.iot.cygnus.interceptors.NGSIGroupingInterceptor$Builder
cygnusagent.sources.http-source.interceptors.gi.grouping_rules_conf_file = /usr/cygnus/conf/grouping_rules.conf
cygnusagent.channels.mongo-channel.type = memory
cygnusagent.channels.mongo-channel.capacity = 1000
cygnusagent.channels.mongo-channel.transactionCapacity = 100
cygnusagent.sinks.mongo-sink.channel = mongo-channel
cygnusagent.sinks.mongo-sink.type = com.telefonica.iot.cygnus.sinks.NGSIMongoSink
cygnusagent.sinks.mongo-sink.mongo_hosts=MONGODBIP:27017
cygnusagent.sinks.mongo-sink.mongo_username=
cygnusagent.sinks.mongo-sink.mongo_password=
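For what it's worth, Cygnus raises this error when the number of comma-separated paths in the Fiware-ServicePath header differs from the number of elements in the notification's contextResponses array. A well-formed NGSIv1 notification for this configuration would carry exactly one path for one context response; a sketch (all entity identifiers below are illustrative, not from the question):

```shell
# One service path in the Fiware-ServicePath header for exactly one
# element in contextResponses (identifiers are illustrative placeholders).
cat > /tmp/notification.json <<'EOF'
{
  "subscriptionId": "51c0ac9ed714fb3b37d7d5a8",
  "originator": "localhost",
  "contextResponses": [
    {
      "contextElement": {
        "type": "Room",
        "isPattern": "false",
        "id": "Room1",
        "attributes": [
          { "name": "temperature", "type": "float", "value": "26.5" }
        ]
      },
      "statusCode": { "code": "200", "reasonPhrase": "OK" }
    }
  ]
}
EOF
# With Cygnus listening on 5050 (as in the config above), send it like this:
# curl http://localhost:5050/notify -X POST \
#      -H 'Content-Type: application/json' \
#      -H 'Fiware-Service: default' \
#      -H 'Fiware-ServicePath: /cygnusservicepath' \
#      -d @/tmp/notification.json
echo "notification body written to /tmp/notification.json"
```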
Many thanks for your support
Best Regards
Nacho
Nacho | Thu, 31 Aug 2017 12:46:25 +0200 | https://ask.fiware.org/question/825/

How to authenticate a user with keyrock installed in own computer?
https://ask.fiware.org/question/810/how-to-authenticate-a-user-with-keyrock-installed-in-own-computer/
Hello, I am a beginner with FIWARE and Keyrock.
I guess that this question will be a little... basic.
How can I authenticate a user with Keyrock installed on my computer? I don't understand how to run even a small proof of concept with Keyrock.
I may not be explaining this well, but I have installed Keyrock and Horizon on my computer, though I don't know if the latter is necessary. After that, I don't know what I can do.
Please I need help.
Thank you so much.
Best regards.
MAGF | Tue, 18 Jul 2017 15:20:25 +0200 | https://ask.fiware.org/question/810/

IDAS and IOT broker differences
https://ask.fiware.org/question/1033/idas-and-iot-broker-differences/
If IDAS can send the southbound traffic to the Context Broker, why is the IoT Broker required?
ashok kurukundi | Thu, 04 Oct 2018 12:00:43 +0200 | https://ask.fiware.org/question/1033/

Biz-ecosystem, IdM authentication
https://ask.fiware.org/question/1031/biz-ecosystem-idm-authentication/
Hello,
I'm having trouble when running local instances of the Business API Ecosystem with the IdM through the docker installations.
My trouble lies in the callback when authenticating new users in my IdM application. I've set it up so that both the application in the IdM and the config.js file for the BAE have the callback URL localhost:8004/auth/fiware/callback, which works with the lab IdM but not with my local installation.
Could there be a problem with the versions I'm running? I'm running 6.4.0 on the BAE and 7.0.2 on the IdM.
Thank you in advance for any responses.
Carl Ctidelius | Mon, 17 Sep 2018 14:24:51 +0200 | https://ask.fiware.org/question/1031/

Create historic graph using Orion Context Broker data in wirecloud or other dashboard
https://ask.fiware.org/question/1012/create-historic-graph-using-orion-context-broker-data-in-wirecloud-or-other-dashboard/
I am developing an application using FIWARE GEs (Orion Context Broker, IoT Agent, Cygnus and MySQL) and I would like to visualize the data that I receive from Orion and create a historic graph.
The application is fully integrated locally using docker-compose and the Orion receives measurements from my sensor.
I've tried to use wirecloud installed locally on my pc but I didn't manage to find the correct widgets to create the graph.
Can someone suggest the right way to configure WireCloud to display the graph, or suggest another dashboard that I could use?
maria13 | Tue, 19 Jun 2018 15:42:57 +0200 | https://ask.fiware.org/question/1012/

What is the current state of FIWARE IoT Edge GEs? Particularly, the Protocol Adaptor and Gateway Logic GEs?
https://ask.fiware.org/question/770/what-is-the-current-state-of-fiware-iot-edge-ges-particularlythe-protocol-adaptor-and-gateway-logic-ges/
I wanted to check on the current status of the FIWARE IoT Edge GEs. They still appear as part of the FIWARE IoT architecture here: https://forge.fiware.org/plugins/mediawiki/wiki/fiware/index.php/Internet_of_Things_(IoT)_Services_Enablement_Architecture, but are no longer actively listed in the FIWARE catalogue.
I understand the Protocol Adaptor is deprecated, based on what I see in the catalogue.
Regarding the Gateway Logic GE, it seems it was never implemented even though the specification for it appears in an older release under the name of IoT.Gateway.DeviceManagement: https://forge.fiware.org/plugins/mediawiki/wiki/fiware/index.php/FIWARE.ArchitectureDescription.IoT.Gateway.DeviceManagement
Does it mean these two GEs are completely out of the picture and IDEC is the only active edge component in FIWARE? Or are there still plans to implement the Gateway Logic and Protocol Adaptor GEs in next releases?
Many thanks in advance for clarifying.
Kind regards,
ilknurichulani | Thu, 02 Mar 2017 17:47:39 +0100 | https://ask.fiware.org/question/770/

REST API URL to subscribe orion from Cepheus cep
https://ask.fiware.org/question/1026/rest-api-url-to-subscribe-orion-from-cepheus-cep/
An entity has been created in Orion, and we want to subscribe to it from Cepheus CEP.
I am using the /v1/subscribeContext URL for the subscription. The body of this POST request is below:
{
  "entities": [
    {
      "type": "Room",
      "isPattern": "false",
      "id": "Room1"
    }
  ],
  "attributes": [
    "pressure", "temperature"
  ],
  "reference": "http://localhost:8080",
  "duration": "P1M",
  "notifyConditions": [
    {
      "type": "ONCHANGE",
      "condValues": [
        "pressure", "temperature"
      ]
    }
  ],
  "throttling": "PT5S"
}
Now, in the reference field I have to give the URL where the subscribed values will be published. If I simply give 'localhost:8080' it does not work; I have to give a specific URL. Can anyone please tell me what the URL of the CEP is where Orion should publish the notifications?
raman | Thu, 09 Aug 2018 12:51:20 +0200 | https://ask.fiware.org/question/1026/

perseo-core
https://ask.fiware.org/question/1011/perseo-core/
Hi,
I'm trying to install perseo following the guide from https://github.com/telefonicaid/perseo-core/blob/master/documentation/deployment.md
I guess that I have to install perseo-core first and then install perseo-fe. When I try to deploy perseo-core, I get some errors (the issue is probably on my side).
When I try `docker build -t perseo .`, after a few warnings it seems to hang at some point:
```
http://mirror.uv.es/mirror/CentOS/7.5.1804/os/x86_64/repodata/repomd.xml: [Errno 12] Timeout on http://mirror.uv.es/mirror/CentOS/7.5.1804/os/x86_64/repodata/repomd.xml: (28, 'Operation too slow. Less than 1000 bytes/sec transferred the last 30 seconds')
Trying other mirror.
ftp://ftp.cesca.cat/centos/7.5.1804/extras/x86_64/repodata/repomd.xml: [Errno 12] Timeout on ftp://ftp.cesca.cat/centos/7.5.1804/extras/x86_64/repodata/repomd.xml: (28, 'Operation too slow. Less than 1000 bytes/sec transferred the last 30 seconds')
Trying other mirror.
http://ftp.cica.es/CentOS/7.5.1804/extras/x86_64/repodata/repomd.xml: [Errno 12] Timeout on http://ftp.cica.es/CentOS/7.5.1804/extras/x86_64/repodata/repomd.xml: (28, 'Operation too slow. Less than 1000 bytes/sec transferred the last 30 seconds')
Trying other mirror.
http://mirror.airenetworks.es/CentOS/7.5.1804/extras/x86_64/repodata/repomd.xml: [Errno 12] Timeout on http://mirror.airenetworks.es/CentOS/7.5.1804/extras/x86_64/repodata/repomd.xml: (28, 'Operation too slow. Less than 1000 bytes/sec transferred the last 30 seconds')
Trying other mirror.
http://centos.uvigo.es/7.5.1804/extras/x86_64/repodata/repomd.xml: [Errno 12] Timeout on http://centos.uvigo.es/7.5.1804/extras/x86_64/repodata/repomd.xml: (28, 'Operation too slow. Less than 1000 bytes/sec transferred the last 30 seconds')
Trying other mirror.
http://ftp.uma.es/mirror/CentOS/7.5.1804/extras/x86_64/repodata/repomd.xml: [Errno 12] Timeout on http://ftp.uma.es/mirror/CentOS/7.5.1804/extras/x86_64/repodata/repomd.xml: (28, 'Operation too slow. Less than 1000 bytes/sec transferred the last 30 seconds')
```
If I try to install it from the RPM, I get the following errors:
```
RPM build errors:
Bad exit status from /var/tmp/rpm-tmp.1yMNCh (%prep)
[centos@digitanimal-fiware-test-2018 rpm]$ ^C
[centos@digitanimal-fiware-test-2018 rpm]$ sudo ./create-rpm.sh 1 0.1
Executing(%prep): /bin/sh -e /var/tmp/rpm-tmp.iyb82M
+ umask 022
+ cd /home/centos/perseo-core/rpm/BUILD
+ echo '[INFO] Preparing installation'
[INFO] Preparing installation
+ rm -Rf /home/centos/perseo-core/rpm/BUILDROOT/perseo-cep-core-0.1-1.x86_64
+ mkdir -p /home/centos/perseo-core/rpm/BUILDROOT/perseo-cep-core-0.1-1.x86_64
+ '[' -d /home/centos/perseo-core/rpm/BUILDROOT/perseo-cep-core-0.1-1.x86_64/usr/share/tomcat/webapps ']'
+ mkdir -p /home/centos/perseo-core/rpm/BUILDROOT/perseo-cep-core-0.1-1.x86_64/usr/share/tomcat/webapps
+ cp -ax /home/centos/perseo-core/rpm/../target/perseo-core-0.1.war /home/centos/perseo-core/rpm/BUILDROOT/perseo-cep-core-0.1-1.x86_64/usr/share/tomcat/webapps/perseo-core.war
cp: cannot stat '/home/centos/perseo-core/rpm/../target/perseo-core-0.1.war': No such file or directory
error: Bad exit status from /var/tmp/rpm-tmp.iyb82M (%prep)
RPM build errors:
Bad exit status from /var/tmp/rpm-tmp.iyb82M (%prep)
```
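For what it's worth, the failing %prep step is just a `cp` of `target/perseo-core-0.1.war`, so the error usually means the Maven build (`mvn clean package`) was never run, or failed. A pre-flight check along these lines (the function and messages are mine, not part of the repo) makes the cause explicit before invoking create-rpm.sh:

```shell
# The RPM %prep step copies target/perseo-core-<version>.war into the
# buildroot; if Maven was never run there is nothing to copy and %prep aborts.
check_war() {
  # succeeds if at least one perseo-core WAR exists in the given directory
  ls "$1"/perseo-core-*.war >/dev/null 2>&1 \
    && echo "war present" \
    || echo "war missing - run 'mvn clean package' first"
}
# Run from the perseo-core repo root:
check_war ./target
```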
Any help? Also, if there is any additional documentation for Perseo, could you share the links? I'm using the docs from http://fiware-iot-stack.readthedocs.io/en/latest/cep/index.html and from GitHub.
Thanks
Ignacio | Tue, 19 Jun 2018 09:19:18 +0200 | https://ask.fiware.org/question/1011/

Provisioning and autoprovisioning
https://ask.fiware.org/question/968/provisioning-and-autoprovisioning/
I'm using the IoT Agent and I'm reading that there are two types of provisioning: provisioning and autoprovisioning (or preprovisioning).
I tried "provisioning" simply using a curl command (i.e. curl localhost:4041/iot/devices ...), but I don't understand preprovisioning.
Are they different, or are they the same thing?
If not, could you give me some info or a real example?
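For context, explicit provisioning is a POST to the agent's north port describing the device before it ever sends data; with autoprovisioning the agent creates the device on the fly from its configured service group when an unknown device sends a measure. A sketch of the explicit case (service name, device id and attributes below are made-up placeholders):

```shell
# Explicit device provisioning on the IoT Agent north port (4041 by default).
# device_id, entity names and the attribute mapping are illustrative only.
cat > /tmp/provision.json <<'EOF'
{
  "devices": [
    {
      "device_id": "sensor01",
      "entity_name": "Sensor:001",
      "entity_type": "Sensor",
      "attributes": [
        { "object_id": "t", "name": "temperature", "type": "Number" }
      ]
    }
  ]
}
EOF
# With an IoT Agent running, the device would be provisioned like this:
# curl -X POST http://localhost:4041/iot/devices \
#      -H 'Content-Type: application/json' \
#      -H 'fiware-service: myservice' -H 'fiware-servicepath: /' \
#      -d @/tmp/provision.json
echo "provisioning body written to /tmp/provision.json"
```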
Thanks a lot
pasquy73 | Fri, 16 Feb 2018 11:57:22 +0100 | https://ask.fiware.org/question/968/

Provisionning lorawan devices to a fiware instance
https://ask.fiware.org/question/1021/provisionning-lorawan-devices-to-a-fiware-instance/
I followed this link https://github.com/Atos-Research-and-Innovation/IoTagent-LoRaWAN/blob/master/docs/users_manual.md so that I'd be able to provision my LoRaWAN sensors to my FIWARE instance. I of course changed the IDs so that they match my sensors, but the provisioning didn't work. Can you please help me? (My LoRaWAN gateway and application live on the TTN platform, and I just want the data sent by the sensors to be processed by my FIWARE instance.)
chaima | Mon, 16 Jul 2018 09:44:24 +0200 | https://ask.fiware.org/question/1021/

Connecting Cepheus to Orion
https://ask.fiware.org/question/1000/connecting-cepheus-to-orion/
Dear Fiware Support Team,
I am having the following issue:
I want to connect Cepheus to Orion (Orion as a source of events), process some queries and return new events to Orion (this time as destination), as depicted below:
Diagram: https://imgur.com/a/CI43Qis
I have tried to configure Cepheus towards this goal, with the following example configuration:
{
  "host": "http://localhost:8080",
  "in": [
    {
      "type": "PersonDetection",
      "id": "PersonDetection",
      "providers": [
        {
          "url": "http://orion:1026",
          "serviceName": "==SERVICE_NAME==",
          "servicePath": "==SERVICE_PATH=="
        }
      ],
      "attributes": [
        { "name": "tagId", "type": "Integer" },
        { "name": "sectorId", "type": "Integer" },
        { "name": "positionX", "type": "Float" },
        { "name": "positionY", "type": "Float" },
        { "name": "ts", "type": "Timestamp" }
      ]
    }
  ],
  "out": [
    {
      "id": "CellDetection",
      "type": "CellDetection",
      "attributes": [
        { "name": "cellId", "type": "Integer" },
        { "name": "tagId", "type": "Integer" },
        { "name": "ts", "type": "Timestamp" }
      ]
    }
  ],
  "brokers": [
    {
      "url": "http://orion:1026",
      "serviceName": "==SERVICE_NAME==",
      "servicePath": "==SERVICE_PATH=="
    }
  ],
  "statements": [
  ]
}
With this configuration I want Cepheus to receive "PersonDetection" events (then run some query) and emit "CellDetection" events.
However, when Orion emits "PersonDetection" events, I see no logs in the Cepheus console, which probably means that it is not receiving any events, maybe because it did not subscribe to Orion properly.
The only logs generated by Cepheus are the following:
cepheus_1 | /usr/lib/python2.7/dist-packages/supervisor/options.py:296: UserWarning: Supervisord is running as root and it is searching for its configuration file in default locations (including its current working directory); you probably want to specify a "-c" argument specifying an absolute path to a configuration file for improved security.
cepheus_1 | 'Supervisord is running as root and it is searching '
cepheus_1 | 2018-06-05 18:19:12,362 CRIT Supervisor running as root (no user in config file)
cepheus_1 | 2018-06-05 18:19:12,362 WARN Included extra file "/etc/supervisor/conf.d/supervisord.conf" during parsing
cepheus_1 | 2018-06-05 18:19:12,378 INFO RPC interface 'supervisor' initialized
cepheus_1 | 2018-06-05 18:19:12,378 CRIT Server 'unix_http_server' running without any HTTP authentication checking
cepheus_1 | 2018-06-05 18:19:12,378 INFO supervisord started with pid 1
cepheus_1 | 2018-06-05 18:19:13,381 INFO spawned: 'broker' with pid 9
cepheus_1 | 2018-06-05 18:19:13,384 INFO spawned: 'cep' with pid 10
cepheus_1 | 2018-06-05 18:19:14,387 INFO success: broker entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
cepheus_1 | 2018-06-05 18:19:14,387 INFO success: cep entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
cepheus_1 | 2018-06-05 18:19:17.026 INFO 9 --- [ main] com.orange.cepheus.broker.Application : Starting Application on e62b33e4a5c1 with PID 9 (/opt/cepheus/cepheus-broker.jar started by root in /opt/cepheus)
cepheus_1 | 2018-06-05 18:19:17.162 INFO 10 --- [ main] com.orange.cepheus.cep.Application : Starting Application on e62b33e4a5c1 with PID 10 (/opt/cepheus/cepheus-cep.jar started by root in /opt/cepheus)
cepheus_1 | 2018-06-05 18:19:37.016 INFO 9 --- [ main] com.orange.cepheus.broker.Application : Started Application in 20.785 seconds (JVM running for 23.597)
cepheus_1 | 2018-06-05 18:19:37.639 INFO 10 --- [ main] com.orange.cepheus.cep.Application : Started Application in 21.395 seconds (JVM running for 24.211)
cepheus_1 | 2018-06-05 18:20:30.115 INFO 10 --- [nio-8080-exec-1] c.o.c.cep.controller.AdminController : Update configuration
cepheus_1 | 2018-06-05 18:20:30.116 INFO 10 --- [nio-8080-exec-1] c.o.cepheus.cep.EsperEventProcessor : Apply configuration
cepheus_1 | 2018-06-05 18:20:30.116 INFO 10 --- [nio-8080-exec-1] c.o.cepheus.cep.EsperEventProcessor : Add new event type: EventType{id='PersonDetection', type='PersonDetection', isPattern=false, attributes=[Attribute{name='sectorId', type='Integer', metadata=[], jsonpath='null'}, Attribute{name='ts', type='Timestamp', metadata=[], jsonpath='null'}, Attribute{name='positionX', type='Float', metadata=[], jsonpath='null'}, Attribute{name='positionY', type='Float', metadata=[], jsonpath='null'}, Attribute{name='tagId', type='Integer', metadata=[], jsonpath='null'}]}
cepheus_1 | 2018-06-05 18:20:30.190 INFO 10 --- [nio-8080-exec-1] c.o.cepheus.cep.EsperEventProcessor : Add new event type: EventType{id='CellDetection', type='CellDetection', isPattern=false, attributes=[Attribute{name='ts', type='Timestamp', metadata=[], jsonpath='null'}, Attribute{name='cellId', type='Integer', metadata=[], jsonpath='null'}, Attribute{name='tagId', type='Integer', metadata=[], jsonpath='null'}]}
cepheus_1 | 2018-06-05 18:20:30.220 INFO 10 --- [nio-8080-exec-1] c.o.c.cep.persistence.JsonPersistence : Save configuration in /tmp/cep-default-.json
cepheus_1 | 2018-06-05 18:20:30.233 INFO 10 --- [taskScheduler-1] c.o.cepheus.cep.SubscriptionManager : Launch of the periodic subscription task at 2018-06-05T18:20:30.229Z
What am I missing?
Thanks!
PS. ==SERVICE_PATH== and ==SERVICE_NAME== are just placeholders for the real path and name.
Pedro D. | Tue, 05 Jun 2018 20:21:17 +0200 | https://ask.fiware.org/question/1000/

Strange Comet issues
https://ask.fiware.org/question/1002/strange-comet-issues/
Dear Fiware,
I am trying to connect STH Comet to Orion in order to aggregate some events.
However I am having some errors:
In order to reproduce these errors for you to analyze, I have created a small project that isolates Orion and STH Comet and reproduces the errors in the log messages: https://github.com/PedroD/comet_demo
When you run it, you will find log messages.
These log messages contain all commands that the coordinator app sends to Orion and STH Comet, so that you don't need to worry about the Kotlin project's source.
In sum, the issues we are having are:
1) Comet is, for some reason, overflowing like this:
```
sth_1 | time=2018-06-09T11:04:02.626Z | lvl=WARN | corr=n/a | trans=n/a | op=OPER_STH_DB_LOG | from=n/a | srv=n/a | subsrv=n/a | comp=STH | msg=The size in bytes of the namespace for storing the aggregated data ("sth_sensei_service" plus "sth_/sensei,/sensei,/sensei,/sensei,/sensei,/sensei,/sensei,/sensei,/sensei,/sensei_PersonDetection_PersonDetection.aggr", 138 bytes) is bigger than 120 bytes
```
2) Comet is, for some reason, having issues persisting some data in MongoDB, even though the coordinator only tries to register the entities once:
```
sth_1 | time=2018-06-09T11:04:12.870Z | lvl=ERROR | corr=d78056a4-6bd4-11e8-97dd-0242ac120005 | trans=745ad73e-ebd0-49a4-b843-261981c8f9b2 | op=OPER_STH_POST | from=n/a | srv=sensei_service | subsrv=/sensei | comp=STH | msg=Error when getting the raw data collection for storing:MongoError: a collection 'sth_sensei_service.sth_/sensei_PersonDetection_PersonDetection' already exists
```
3) When asked for aggregations, using the url below, Comet returns empty values:
URL: `http://sth:8666/STH/v1/contextEntities/type/PersonDetection/id/PersonDetection/attributes/positionX?aggrMethod=sum&aggrPeriod=second&dateFrom=2016-02-01T00:00:00.000Z&dateTo=2019-01-01T23:59:59.999Z`
```
demo_1 | Requesting aggregation to Comet:
demo_1 | {"contextResponses":[{"contextElement":{"attributes":[{"name":"positionX","values":[]}],"id":"PersonDetection","isPattern":false,"type":"PersonDetection"},"statusCode":{"code":"200","reasonPhrase":"OK"}}]}
demo_1 |
demo_1 | Comet seems to be sending an empty "values" array. What is going on?
demo_1 |
```
What is going on? How can we solve these issues?
Thanks!
Pedro D. | Sat, 09 Jun 2018 13:22:59 +0200 | https://ask.fiware.org/question/1002/

Cygnus tutorial - not working
https://ask.fiware.org/question/1001/cygnus-tutorial-not-working/
Dear Fiware User,
I am trying to complete the example on Cygnus available at http://fiware-cygnus.readthedocs.io/en/latest/cygnus-ngsi/quick_start_guide/index.html
I have a working Context Broker available at localhost. However, when I try to run the _notification.sh_ script written according to the tutorial, this is the error that I get:
$ ./notification.sh http://localhost:5050/notify
* About to connect() to localhost port 5050 (#0)
* Trying ::1... Connection refused
* Trying 127.0.0.1... Connection refused
* couldn't connect to host
* Closing connection #0
curl: (7) couldn't connect to host
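"Connection refused" means nothing is listening on port 5050, i.e. Cygnus is not up (or is bound to a different port). A quick probe before re-running the script, using bash's /dev/tcp (the function name is mine; host and port are the tutorial defaults):

```shell
# "Connection refused" = no listener on the target port. Quick probe:
check_port() {
  (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null \
    && echo "something is listening on $1:$2" \
    || echo "nothing listening on $1:$2 - is Cygnus running?"
}
check_port localhost 5050
```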
Can you please support me?
Thank you very much
Raffa87 | Wed, 06 Jun 2018 11:08:34 +0200 | https://ask.fiware.org/question/1001/

Cost to use FiWare in my own University to implement an Application with this Technology
https://ask.fiware.org/question/1009/cost-to-use-fiware-in-my-own-university-to-implement-an-application-with-this-technology/
Here are some questions I am looking for answers to:
• How much does it cost to use them?
• If I use them on their own cloud, do I have access to the system data: logs of user data, who accessed files, system configuration, etc.?
• Can I download them onto our internal servers/nodes/cloud?
• If I set them up on our own cloud, how much would it cost?
• What are the requirements needed to set these up on our own servers?
I hope my questions are clear; thanks a lot for your answers!
Daniel Sevilla | Wed, 13 Jun 2018 20:13:26 +0200 | https://ask.fiware.org/question/1009/

How to fetch data to WireCloud from Orion Context Broker?
https://ask.fiware.org/question/991/how-to-fetch-data-to-wirecloud-from-orion-contrext-broker/
I'm trying to use the 'ngsi-source-operator' to fetch data from the Orion Context Broker and to populate that data on the Map Widget.
My Orion instance is running at http://130.206.117.237:1026 and I'm using the global instance of WireCloud.
Please suggest what settings I should use for the NGSI-source operator in order to fetch data.
Also, while creating subscriptions, which URL should I use to send the notifications to?
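For reference, an NGSIv2 subscription is created by POSTing a body like the one sketched below to `/v2/subscriptions` on the Orion instance. The entity type `Room` and the receiver URL are hypothetical placeholders; the notification URL must be reachable from Orion's host, which is the usual stumbling block when using a hosted WireCloud instance:

```python
import json

# Sketch of an NGSIv2 subscription body. 'Room' and the receiver URL are
# hypothetical placeholders -- substitute your own entity type and a URL
# that Orion's host can actually reach.
subscription = {
    "description": "Push entity changes to my notification receiver",
    "subject": {
        "entities": [{"idPattern": ".*", "type": "Room"}],
    },
    "notification": {
        "http": {"url": "http://my-receiver.example.com/notify"},
    },
}
body = json.dumps(subscription)
print(body)
# POST it with, e.g.:
#   curl -X POST http://130.206.117.237:1026/v2/subscriptions \
#        -H 'Content-Type: application/json' -d "$body"
```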
Another query is regarding the Table Viewer widget: as it requires its input in the form of 'Data and Structures', is there any available operator to fetch or create a dataset for this widget?
PC00474884 | Mon, 14 May 2018 13:56:52 +0200 | https://ask.fiware.org/question/991/

Error installing fiware-pep-proxy
https://ask.fiware.org/question/997/error-installing-fiware-pep-proxy/
When installing fiware-pep-proxy I had a problem at the step: npm install
I followed this guide:
http://fiware-pep-proxy.readthedocs.io/en/latest/admin_guide/#installation-and-administration-guide
The output is in this pastebin:
https://pastebin.com/YdvjdSWv
The npm-debug.log is in:
https://pastebin.com/Cbqwy24a
roxanasb | Fri, 18 May 2018 03:45:06 +0200 | https://ask.fiware.org/question/997/

REST API - Past values of an entity
https://ask.fiware.org/question/985/rest-api-past-values-of-an-entity/
What I have
--
Docker Containers:
Orion CB
MongoDB
What I do
--
I use the REST API (NGSIv2) so I:
1. Create an entity (using POST)
2. Update the values from this entity (using PUT)
3. Get the value from the entity (using other app: GET)
My question
--
Is there any way (inherent to ORION) to have the **full history of values** of this entity? Something like a **queue of messages**, or so, being them the values of this entity.
If not, which **mechanism should I use** (inherent to ORION again) to be able to build it in my second app?
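Orion itself keeps only the latest value of each attribute, so the usual mechanism is a subscription whose notifications a second app accumulates. A minimal sketch of that accumulation step (the payload shape follows the NGSIv2 normalized notification format; entity and attribute names are illustrative):

```python
import json
from collections import defaultdict

# Sketch: accumulate a per-entity value history from Orion NGSIv2
# notifications. Each notification body carries a "data" list of entity
# snapshots; storing every snapshot as it arrives yields the history
# that Orion itself does not keep.
history = defaultdict(list)

def on_notification(body: str) -> None:
    """Append every attribute value in the notification to the history."""
    for entity in json.loads(body).get("data", []):
        for name, attr in entity.items():
            if name in ("id", "type"):
                continue
            history[(entity["id"], name)].append(attr.get("value"))

# Example notification (NGSIv2 'normalized' format):
on_notification(json.dumps({
    "subscriptionId": "abc123",
    "data": [{"id": "Room1", "type": "Room",
              "temperature": {"type": "Number", "value": 23.5}}],
}))
print(history[("Room1", "temperature")])  # -> [23.5]
```

In practice `on_notification` would be the handler of an HTTP endpoint that the subscription's notification URL points at, and the history would go to durable storage rather than a dict.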
I guess I expect to have a subscription mode or whatever so I get a notification of a new value (pub/sub model?) still using the REST API in my second app, not needing it to have hard-coded when a new value of the entity will be available at ORION, since this may be unknown even for me.
Btc Sources | Tue, 24 Apr 2018 18:50:42 +0200 | https://ask.fiware.org/question/985/

Orion Virtual Box Image Link
https://ask.fiware.org/question/983/orion-virtual-box-image-link/
Hello, I have a question regarding the VirtualBox image link for "Orion Context Broker" found here: https://catalogue.fiware.org/enablers/publishsubscribe-context-broker-orion-context-broker/downloads.
The link is not working; it redirects me to https://www.fiware.org/
Maybe I missed something, or this is not supported, etc. Any info would help.
Thanks, Igor.
Igor | Mon, 23 Apr 2018 12:35:22 +0200 | https://ask.fiware.org/question/983/

Error with Docker daemon for docker installation on Fiware cloud
https://ask.fiware.org/question/984/error-with-docker-daemon-for-docker-installation-on-fiware-cloud/
I am new to the Fiware and Docker technologies, so I need some help.
I am following the instructions from this link http://simple-docker-hosting-on-fiware-cloud.readthedocs.io/en/v1.0/manuals/install in order to create a docker-host machine on Fiware cloud but when I run the following command:
docker-machine create -d openstack --openstack-flavor-id="2" --openstack-image-name="base_ubuntu_14.04" --openstack-net-name="node-int-net-01" --openstack-floatingip-pool="public-ext-net-01" --openstack-sec-groups="docker-sg" --openstack-ssh-user "ubuntu" docker-host
I receive the following error:
Error creating machine: Error running provisioning: Unable to verify the Docker daemon is listening: Maximum number of retries (10) exceeded
I can see the instance of the docker-host machine on the Fiware cloud, though. But when I run the following command:
eval "$(docker-machine env docker-host)"
the following error comes up:
Error checking TLS connection: Error checking and/or regenerating the certs: There was an error validating certificates for host "147.27.60.136:2376": dial tcp 147.27.60.136:2376: connectex: No connection could be made because the target machine actively refused it.
You can attempt to regenerate them using 'docker-machine regenerate-certs [name]'.
Be advised that this will trigger a Docker daemon restart which might stop running containers.
I also tried to regenerate the certificates:
docker-machine regenerate-certs docker-host
but I received the following error:
Error getting SSH command to check if the daemon is up: ssh command error:
command : sudo docker version
err : exit status 1
output : Client:
Version: 18.04.0-ce
API version: 1.37
Go version: go1.9.4
Git commit: 3d479c0
Built: Tue Apr 10 18:21:14 2018
OS/Arch: linux/amd64
Experimental: false
Orchestrator: swarm
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
What am I doing wrong?
I use Docker Community Edition for Windows 10.
The Docker version is:
Client:
Version: 18.03.0-ce
API version: 1.37
Go version: go1.9.4
Git commit: 0520e24
Built: Wed Mar 21 23:06:28 2018
OS/Arch: windows/amd64
Experimental: false
Orchestrator: swarm
Server:
Version: 18.03.0-ce
API version: 1.37 (minimum version 1.12)
Go version: go1.9.4
Git commit: 0520e24
Built: Wed Mar 21 23:14:32 2018
OS/Arch: linux/amd64
Experimental: false
maria13 | Tue, 24 Apr 2018 08:59:50 +0200 | https://ask.fiware.org/question/984/

Installing Docker on FIWARE Cloud Error
https://ask.fiware.org/question/986/installing-docker-on-fiware-cloud-error/
I am new to the Fiware and Docker technologies, so I need some help.
I am following the instructions from this link http://simple-docker-hosting-on-fiwar... in order to create a docker-host machine on Fiware cloud but when I run the following command:
docker-machine create -d openstack --openstack-flavor-id="2" --openstack-image-name="baseubuntu14.04" --openstack-net-name="node-int-net-01" --openstack-floatingip-pool="public-ext-net-01" --openstack-sec-groups="docker-sg" --openstack-ssh-user "ubuntu" docker-host
I receive the following error:
Error creating machine: Error running provisioning: Unable to verify the Docker daemon is listening: Maximum number of retries (10) exceeded
I can see the instance of the docker-host machine on the Fiware cloud, though. But when I run the following command:
eval "$(docker-machine env docker-host)"
the following error comes up:
Error checking TLS connection: Error checking and/or regenerating the certs: There was an error validating certificates for host "147.27.60.136:2376": dial tcp 147.27.60.136:2376: connectex: No connection could be made because the target machine actively refused it.
You can attempt to regenerate them using 'docker-machine regenerate-certs [name]'.
Be advised that this will trigger a Docker daemon restart which might stop running containers.
I also tried to regenerate the certificates:
docker-machine regenerate-certs docker-host
but I received the following error:
Error getting SSH command to check if the daemon is up: ssh command error:
command : sudo docker version
err : exit status 1
output : Client:
Version: 18.04.0-ce
API version: 1.37
Go version: go1.9.4
Git commit: 3d479c0
Built: Tue Apr 10 18:21:14 2018
OS/Arch: linux/amd64
Experimental: false
Orchestrator: swarm
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
What am I doing wrong?
I use Docker Community Edition for Windows 10.
The Docker version is:
Client:
Version: 18.03.0-ce
API version: 1.37
Go version: go1.9.4
Git commit: 0520e24
Built: Wed Mar 21 23:06:28 2018
OS/Arch: windows/amd64
Experimental: false
Orchestrator: swarm
Server:
Version: 18.03.0-ce
API version: 1.37 (minimum version 1.12)
Go version: go1.9.4
Git commit: 0520e24
Built: Wed Mar 21 23:14:32 2018
OS/Arch: linux/amd64
Experimental: false
maria13 | Fri, 27 Apr 2018 09:20:35 +0200 | https://ask.fiware.org/question/986/

Unable to run Cygnus with MySQL agent
https://ask.fiware.org/question/989/unable-to-run-cygnus-with-mysql-agent/
Hi All,
I am trying to set up and understand Cygnus, but I am facing an issue during installation.
I followed the steps below.
1. Installed Cygnus using Docker (docker run -d -p 5050:5050 -p 8081:8081 fiware/cygnus-common)
2. Executed the version command (curl http://172.17.0.2:8081/v1/version), which gave the following response: {"success":"true","version":"1.8.0_SNAPSHOT.39b2aa4789c61fa92fe6edc905410f1ddeb33490"}
3. Logged into the Cygnus container using the command docker exec -it /bin/bash
4. Created a new file named "agent_mysql.conf" in the "/opt/apache-flume/conf/" folder.
Configuration details are given below:
```
cygnus-ngsi.sources = http-source
cygnus-ngsi.sinks = mysql-sink
cygnus-ngsi.channels = mysql-channel

cygnus-ngsi.sources.http-source.channels = mysql-channel
cygnus-ngsi.sources.http-source.type = org.apache.flume.source.http.HTTPSource
cygnus-ngsi.sources.http-source.port = 5050
cygnus-ngsi.sources.http-source.handler = com.telefonica.iot.cygnus.handlers.NGSIRestHandler
cygnus-ngsi.sources.http-source.handler.notification_target = /notify
cygnus-ngsi.sources.http-source.handler.default_service = def_serv
cygnus-ngsi.sources.http-source.handler.default_service_path = def_servpath
cygnus-ngsi.sources.http-source.handler.events_ttl = 2
cygnus-ngsi.sources.http-source.interceptors = ts gi
cygnus-ngsi.sources.http-source.interceptors.ts.type = timestamp
cygnus-ngsi.sources.http-source.interceptors.gi.type = com.telefonica.iot.cygnus.interceptors.NGSIGroupingInterceptor$Builder
cygnus-ngsi.sources.http-source.interceptors.gi.grouping_rules_conf_file = /Applications/apache-flume-1.4.0-bin/conf/grouping_rules.conf

cygnus-ngsi.channels.mysql-channel.type = memory
cygnus-ngsi.channels.mysql-channel.capacity = 1000
cygnus-ngsi.channels.mysql-channel.transactionCapacity = 100

cygnus-ngsi.sinks.mysql-sink.channel = mysql-channel
cygnus-ngsi.sinks.mysql-sink.type = com.telefonica.iot.cygnus.sinks.NGSIMySQLSink
cygnus-ngsi.sinks.mysql-sink.mysql_host = localhost
cygnus-ngsi.sinks.mysql-sink.mysql_port = 3306
cygnus-ngsi.sinks.mysql-sink.mysql_username = root
cygnus-ngsi.sinks.mysql-sink.mysql_password = <myPassword>
cygnus-ngsi.sinks.mysql-sink.attr_persistence = row
```
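As an aside, typos in the class names of such a file only surface at runtime as a ClassNotFoundException, so it can pay off to list them before restarting. A small generic sketch (not a Cygnus tool, just an illustration) that parses Flume-style properties and collects the fully-qualified class names worth double-checking:

```python
# Generic sketch: parse Flume-style "key = value" properties and collect
# the fully-qualified class names referenced, so typos can be spotted
# before restarting the agent.
def parse_flume_conf(text):
    props = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        props[key.strip()] = value.strip()
    return props

sample = """
cygnus-ngsi.sources.http-source.handler = com.telefonica.iot.cygnus.handlers.NGSIRestHandler
cygnus-ngsi.sinks.mysql-sink.type = com.telefonica.iot.cygnus.sinks.NGSIMySQLSink
"""
props = parse_flume_conf(sample)
class_names = sorted(v for v in props.values() if v.startswith("com."))
print(class_names)
```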
5. Changed the "cygnus-entrypoint.sh" file in the / (root) folder, replacing the existing launch command with the following one:
```
${FLUME_HOME}/bin/cygnus-flume-ng agent --conf ${CYGNUS_CONF_PATH} -f ${CYGNUS_CONF_PATH}/agent_mysql.conf -n cygnus-ngsi -p ${CYGNUS_API_PORT} -Dflume.root.logger=${CYGNUS_LOG_LEVEL},${CYGNUS_LOG_APPENDER} -Dfile.encoding=UTF-8
```
6. Exited the Docker container and came back to Ubuntu.
7. Stopped and restarted the Docker container.
8. I am now getting the following errors in the logs.
Please check and let me know what I am doing wrong; I appreciate your help.
Logs:
n$AgentConfiguration[1016] : Processing:mysql-sink
time=2018-04-30T14:24:00.807Z | lvl=INFO | corr=N/A | trans=N/A | srv=N/A | subsrv=N/A | comp=cygnus-ngsi | op=validateConfiguration | msg=org.apache.flume.conf.FlumeConfiguration[140] : Post-validation flume configuration contains configuration for agents: [cygnus-ngsi]
time=2018-04-30T14:24:00.808Z | lvl=INFO | corr=N/A | trans=N/A | srv=N/A | subsrv=N/A | comp=cygnus-ngsi | op=loadChannels | msg=org.apache.flume.node.AbstractConfigurationProvider[150] : Creating channels
time=2018-04-30T14:24:00.816Z | lvl=INFO | corr=N/A | trans=N/A | srv=N/A | subsrv=N/A | comp=cygnus-ngsi | op=create | msg=org.apache.flume.channel.DefaultChannelFactory[40] : Creating instance of channel mysql-channel type memory
time=2018-04-30T14:24:00.825Z | lvl=INFO | corr=N/A | trans=N/A | srv=N/A | subsrv=N/A | comp=cygnus-ngsi | op=loadChannels | msg=org.apache.flume.node.AbstractConfigurationProvider[205] : Created channel mysql-channel
time=2018-04-30T14:24:00.832Z | lvl=INFO | corr=N/A | trans=N/A | srv=N/A | subsrv=N/A | comp=cygnus-ngsi | op=create | msg=org.apache.flume.source.DefaultSourceFactory[39] : Creating instance of source http-source, type org.apache.flume.source.http.HTTPSource
time=2018-04-30T14:24:00.836Z | lvl=ERROR | corr=N/A | trans=N/A | srv=N/A | subsrv=N/A | comp=cygnus-ngsi | op=configure | msg=org.apache.flume.source.http.HTTPSource[113] : Error while configuring HTTPSource. Exception follows.
java.lang.ClassNotFoundException: com.telefonica.iot.cygnus.handlers.NGSIRestHandler
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:338)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:264)
at org.apache.flume.source.http.HTTPSource.configure(HTTPSource.java:102)
at org.apache.flume.conf.Configurables.configure(Configurables.java:41)
at org.apache.flume.node.AbstractConfigurationProvider.loadSources(AbstractConfigurationProvider.java:331)
at org.apache.flume.node.AbstractConfigurationProvider.getConfiguration(AbstractConfigurationProvider.java:102)
at org.apache.flume.node.PollingPropertiesFileConfigurationProvider$FileWatcherRunnable.run(PollingPropertiesFileConfigurationProvider.java:140)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
time=2018-04-30T14:24:00.840Z | lvl=ERROR | corr=N/A | trans=N/A | srv=N/A | subsrv=N/A | comp=cygnus-ngsi | op=loadSources | msg=org.apache.flume.node.AbstractConfigurationProvider[366] : Source http-source has been removed due to an error during configuration
java.lang.RuntimeException: java.lang.ClassNotFoundException: com.telefonica.iot.cygnus.handlers.NGSIRestHandler
at com.google.common.base.Throwables.propagate(Throwables.java:156)
at org.apache.flume.source.http.HTTPSource.configure(HTTPSource.java:114)
at org.apache.flume.conf.Configurables.configure(Configurables.java:41)
at org.apache.flume.node.AbstractConfigurationProvider.loadSources(AbstractConfigurationProvider.java:331)
at org.apache.flume.node.AbstractConfigurationProvider.getConfiguration(AbstractConfigurationProvider.java:102)
at org.apache.flume.node.PollingPropertiesFileConfigurationProvider$FileWatcherRunnable.run(PollingPropertiesFileConfigurationProvider.java:140)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.ClassNotFoundException: com.telefonica.iot.cygnus.handlers.NGSIRestHandler
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:338)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:264)
at org.apache.flume.source.http.HTTPSource.configure(HTTPSource.java:102)
... 11 more
time=2018-04-30T14:24:00.841Z | lvl=INFO | corr=N/A | trans=N/A | srv=N/A | subsrv=N/A | comp=cygnus-ngsi | op=create | msg=org.apache.flume.sink.DefaultSinkFactory[40] : Creating instance of sink: mysql-sink, type: com.telefonica.iot.cygnus.sinks.NGSIMySQLSink
time=2018-04-30T14:24:00.842Z | lvl=ERROR | corr=N/A | trans=N/A | srv=N/A | subsrv=N/A | comp=cygnus-ngsi | op=run | msg=org.apache.flume.node.PollingPropertiesFileConfigurationProvider$FileWatcherRunnable[142] : Failed to load configuration data. Exception follows.
org.apache.flume.FlumeException: Unable to load sink type: com.telefonica.iot.cygnus.sinks.NGSIMySQLSink, class: com.telefonica.iot.cygnus.sinks.NGSIMySQLSink
at org.apache.flume.sink.DefaultSinkFactory.getClass(DefaultSinkFactory.java:69)
at org.apache.flume.sink.DefaultSinkFactory.create(DefaultSinkFactory.java:41)
at org.apache.flume.node.AbstractConfigurationProvider.loadSinks(AbstractConfigurationProvider.java:415)
at org.apache.flume.node.AbstractConfigurationProvider.getConfiguration(AbstractConfigurationProvider.java:103)
at org.apache.flume.node.PollingPropertiesFileConfigurationProvider$FileWatcherRunnable.run(PollingPropertiesFileConfigurationProvider.java:140)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.ClassNotFoundException: com.telefonica.iot.cygnus.sinks.NGSIMySQLSink
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:338)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:264)
at org.apache.flume.sink.DefaultSinkFactory.getClass(DefaultSinkFactory.java:67)
... 11 more
Additional Queries
Also, it would be better if you could provide a step-by-step guide for using Cygnus, based on an example.
Actually, we have historical data and would like to show different reports using different criteria/filters (date-wise, user-based, area-based, etc.).
I am planning to use Cygnus with the MySQL agent to store historical data, then create new REST APIs that fetch data from MySQL based on filters and return JSON for reports to a GUI application (maybe WireCloud, or a custom application based on AngularJS).
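The plan above boils down to mapping report filters onto SQL over the table Cygnus persists. A minimal sketch of that mapping, using sqlite3 as a stand-in for MySQL and assuming row-mode columns (recvTime, attrName, attrValue) along the lines of the NGSIMySQLSink documentation:

```python
import sqlite3

# Sketch: in 'row' persistence mode the MySQL sink stores one row per
# attribute update, so date/attribute filters become plain SQL. sqlite3
# stands in for MySQL here; column names follow the row-mode layout.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (recvTime TEXT, attrName TEXT, attrValue TEXT)")
conn.executemany("INSERT INTO readings VALUES (?, ?, ?)", [
    ("2018-05-01T10:00:00Z", "temperature", "21.0"),
    ("2018-05-02T10:00:00Z", "temperature", "22.5"),
])

def values_since(date_from, attr):
    """Return attribute values recorded on or after date_from."""
    return [v for (v,) in conn.execute(
        "SELECT attrValue FROM readings"
        " WHERE recvTime >= ? AND attrName = ? ORDER BY recvTime",
        (date_from, attr))]

print(values_since("2018-05-02T00:00:00Z", "temperature"))  # -> ['22.5']
```

ISO-8601 timestamps compare correctly as strings, which is what makes the date filter a plain `>=` here; a REST endpoint would simply expose such queries with the filters as parameters.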
Is this approach correct? Please suggest.
Can we filter out data using Cygnus while fetching? I mean, does Cygnus provide fetch APIs for the stored historical data?
Or would an Analytics-related GE fit the above requirement?
Kindly guide, and do let me know in case you need further information.

Regards,
Krishan
babbarkrishan | Thu, 03 May 2018 15:32:23 +0200 | https://ask.fiware.org/question/989/