Hi, NB_LINE_INSERTED will work, but it has to be read after the insertion into the DB has completed. Check whether you are trying to print the value before it is set. Use an OnComponentOk or OnSubjobOk trigger and then print it; this will work. Thanks, Ramachandran B
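A minimal sketch of the timing issue described above, simulating Talend's globalMap with a plain HashMap (the component name tMysqlOutput_1 and the row count are illustrative): the counter key is only set once the insert subjob completes, so reading it too early yields null.

```java
import java.util.HashMap;
import java.util.Map;

public class NbLineTiming {
    // Stand-in for Talend's globalMap, where component counters are stored.
    static Map<String, Object> globalMap = new HashMap<>();

    // Stand-in for the insert subjob: the counter is set on completion.
    static void runInsertSubjob(int rows) {
        globalMap.put("tMysqlOutput_1_NB_LINE_INSERTED", rows);
    }

    public static void main(String[] args) {
        // Too early: the counter is not set yet.
        System.out.println(globalMap.get("tMysqlOutput_1_NB_LINE_INSERTED")); // null

        runInsertSubjob(125);

        // OnSubjobOk equivalent: read after the subjob has completed.
        System.out.println(globalMap.get("tMysqlOutput_1_NB_LINE_INSERTED")); // 125
    }
}
```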

Please check whether the user ID you are connecting to the SFTP server with has sufficient permissions. To rule out an issue with tFTPPut, try pushing the files with a shell script as an alternative. Thanks, Ramachandran B

Hi Team, In Talend for Big Data version 6.3.1, the only DB version available for tSAPHanaInput is HDB 1.0. We now want to connect to HDB 2.0. In which Talend release is this feature present? Thanks, Ramachandran B

Hi Mortee/sabrina, Did you find any resolution to the issue? I am facing the same error. When I run the job from Studio it runs fine, but when I run it from TAC it gives the error "Lambda expressions are allowed only at source level 1.8 or above". Kindly comment. Thanks, Ram

Hi, Please check the logs for any warnings. This could be due to truncation of columns caused by incorrect lengths, e.g. the target table has a field of length 20 while the incoming record's field has length 30, i.e. more than 20. If an incoming record cannot be accommodated in the target table, those records will get dropped. Check this by loading the data into a file; if that works fine, revisit the table schema for field lengths, primary keys, etc. Thanks, Ramachandran B
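A hedged sketch of the length problem described above (the target length of 20 matches the example in the post; the method name is illustrative): a value longer than the target column's length cannot be accommodated, so the database would reject or drop that row.

```java
public class LengthCheck {
    static final int TARGET_LEN = 20; // target column length from the example above

    // Returns true when the value fits the target column.
    static boolean fits(String value) {
        return value == null || value.length() <= TARGET_LEN;
    }

    public static void main(String[] args) {
        System.out.println(fits("short value"));                    // true
        System.out.println(fits("this string is thirty chars!!!")); // false: row would be dropped
    }
}
```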

Hi Talend Team, I have this same requirement: I would like to push data to OneDrive. Is anything planned for this feature in upcoming sprints or releases? Thanks, Ramachandran B

Hi, You can make use of the tMSSqlSCD component to find inserts and updates; there are similar SCD components for most databases. For deletions, create another job and do an inner join on the key: the rejects should be the deletes. Thanks, Ramachandran B
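The delete-detection idea above can be sketched in plain Java (key values are illustrative): the target keys that find no match in the source, i.e. the rejects of the inner join, are the deletes.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class DeleteDetect {
    // Keys present in the target but no longer in the source are deletes.
    static Set<Integer> deletes(Set<Integer> sourceKeys, Set<Integer> targetKeys) {
        Set<Integer> d = new HashSet<>(targetKeys);
        d.removeAll(sourceKeys); // join rejects: target keys with no source match
        return d;
    }

    public static void main(String[] args) {
        Set<Integer> source = new HashSet<>(Arrays.asList(1, 2, 4));
        Set<Integer> target = new HashSet<>(Arrays.asList(1, 2, 3, 4));
        System.out.println(deletes(source, target)); // [3]
    }
}
```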

Hi, If I understand correctly, you want to load different schemas to the target. If you are using a licensed version, tMap has a Dynamic data type for exactly this; you can make use of it. Thanks, Ramachandran B

Hi, As you have the date field to identify the delta, it will be easy. Keep two files: one holding the current date (captured when the job flow starts), and another holding the last run date, seeded with a low date such as 19000101. In the first load, read the date from the last-run-date file, use it in a where clause (source data timestamp column > last run date), and run the job. Then move the current-date file's value into the last-run-date file. The incremental run repeats the same process, so you will get only the delta records. Thanks, Ramachandran B
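The two-file delta pattern above can be sketched as follows, with the file I/O replaced by plain variables for brevity (dates and the yyyyMMdd format are illustrative): records newer than the last run date form the delta, and after the run the current date becomes the new last run date.

```java
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;

public class DeltaRun {
    static final SimpleDateFormat F = new SimpleDateFormat("yyyyMMdd");

    public static void main(String[] args) throws ParseException {
        Date lastRun = F.parse("19000101"); // seeded low date for the first load
        Date current = F.parse("20240115"); // "current date" captured at job start

        Date recOld = F.parse("18991231");  // record older than the last run
        Date recNew = F.parse("20230601");  // record changed since the last run

        // The where-clause condition: timestamp column > last run date.
        System.out.println(recOld.after(lastRun)); // false -> not in delta
        System.out.println(recNew.after(lastRun)); // true  -> in delta

        lastRun = current; // roll the current date into the last-run-date file
        System.out.println(F.format(lastRun)); // 20240115
    }
}
```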

Hi sid, Try doing a left outer join with the source as the main flow and the target as the ref (lookup) link, based on the key columns. Rows where the lookup comes back null are inserts; the others are updates. Note that the ref link should be a full file. Thanks, Ram

Hi, Please try adding a tMap component between the two connectors, and for that column alone check for an empty string and, if found, set null; something like: "".equals(row1.Field) ? null : row1.Field. Let me know if the above works. Thanks, Ram
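A compilable version of that tMap expression (the row and column names are illustrative): an empty string is mapped to null so the nullable target column receives NULL instead of "". Note that Java strings must be compared with equals(), not ==.

```java
public class EmptyToNull {
    // Equivalent of the tMap expression: "".equals(field) ? null : field
    static String normalize(String field) {
        return (field != null && field.isEmpty()) ? null : field;
    }

    public static void main(String[] args) {
        System.out.println(normalize(""));    // null
        System.out.println(normalize("abc")); // abc
    }
}
```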

Hi Rohit, NB_LINE gives the total number of rows processed on a link; we can assign this global variable to an output field in tMap. The NB_LINE_UPDATED/NB_LINE_INSERTED global variables can only be used to determine the number of rows inserted into a target DB, and only reliably when the expected row count is within 10000: if the count exceeds that, the counter is reset to 0. So if you are inserting 10002 records into the target, this variable will give you a value of 2, which is incorrect. Use some other strategy to get the total number of records inserted or updated in the target. Please let me know if anything is unclear. Thanks, Ram
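The reset behaviour described above can be sketched as follows, assuming (as the post suggests) that the counter is cleared at each batch boundary of 10000 rows: what remains after the job is only the last partial batch.

```java
public class BatchCounter {
    // Counter value left over after the final batch, if it resets per batch.
    static int nbLineInsertedAfter(int totalRows, int batchSize) {
        return totalRows % batchSize;
    }

    public static void main(String[] args) {
        System.out.println(nbLineInsertedAfter(10002, 10000)); // 2, not 10002
        System.out.println(nbLineInsertedAfter(9999, 10000));  // 9999, still correct
    }
}
```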

In tMap, use something like the following, either in the variables section or in the output column expression: DateCol == null ? TalendDate.parseDate("yyyy-MM-dd HH:mm:ss", TalendDate.getDate("yyyy-MM-dd HH:mm:ss")) : DateCol. (The variant comparing DateCol to "" only applies if DateCol is a String; for a Date column, use the null check.) Thanks
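A plain-Java equivalent of that default, a sketch with illustrative names: when the incoming date column is null, substitute the current timestamp, otherwise pass the value through unchanged.

```java
import java.util.Date;

public class DateDefault {
    // null-safe default: current timestamp when the incoming date is missing.
    static Date withDefault(Date dateCol) {
        return dateCol == null ? new Date() : dateCol;
    }

    public static void main(String[] args) {
        Date d = new Date(0L); // epoch, stands in for a populated column
        System.out.println(withDefault(d).equals(d));  // true: kept as-is
        System.out.println(withDefault(null) != null); // true: defaulted to now
    }
}
```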

Hello Everyone, I am using Talend version 6.3.1. I am able to connect to the database with the required connection and run jobs, and Guess Query also works. But when I click the Guess Schema option it gives an error. What steps should be taken to make Guess Schema work?

Sorry, I missed changing the batch size; now it works as you said. But in that case, how can we get the number of rows updated at the end of the job? If we need the count of updated rows and that count runs into lakhs/millions, this variable will not give us the correct result. Wouldn't it be a worthwhile enhancement to fetch the full count in one run of the job instead of getting the count per batch? Talend Team, please comment.

Thanks Arvin for your reply. What you said seems logical. I tried increasing the commit setting to 20000, but the row count still comes out as 0. Maybe the maximum value for committing is 10000. Please comment.

Hi Everyone, I am trying to pull the row counts of a job into a variable in tStatCatcher in the following subjob. Getting the count of inserted records into a variable (tMap) using ((Integer)globalMap.get("tMSSqlOutput_1_NB_LINE_INSERTED")) works fine and gives the correct result when the number of rows processed is under 9999; when it reaches 10000, it returns 0. However, if I use ((Integer)globalMap.get("tMSSqlOutput_1_NB_LINE")), it gives the correct result for rows >= 10000 as well. Is this a known issue? I use both variables only in a subjob to get the row count from the previous job. Kindly share your thoughts on these global variables. Thanks, Ram