Job doesn't stop

Former Member
0 Kudos

Hello everyone,

I'm using BODS 12.2 and I have run into a problem.

On the Management Console, one of my jobs never finishes. There is no end time and no return code, so I don't know whether the job failed or succeeded.

The job loads a Sybase IQ table, and from the monitor I can see that rows have been updated. It looks as if the job has finished, because it is not doing anything anymore, but the process is still running.

Any information about this case?

Thank you

Accepted Solutions (1)

Former Member
0 Kudos

Romuald,

Can you confirm that splitting the dataflows resolved the issue?

I have also found that a bug was fixed in version 12.2.2.1, so I suggest upgrading from 12.2.0 to 12.2.2.1:

ADAPT01343933

If a dataflow contains a splitter (i.e. a transform that has multiple outputs) and any of the splitter's outputs participates in a join as the inner loop, the splitter may fill up its output buffer to the inner loop of the join before the inner loop of the join starts to consume its input data. The dataflow may hang. This issue has been resolved.

Thanks

Answers (2)

Former Member
0 Kudos

Can you simplify your job as much as possible, even if you need to dramatically increase the number of dataflows, and use template tables as staging tables? This may help you find out where the process gets stuck, or maybe even resolve the issue.

Try setting 'Trace transform' ON. You might find out that something is wrong with one of the transforms.

werner_daehn
Active Contributor
0 Kudos

I would simply check if an al_engine process is still running and kill it.

Obviously only kill it if you are sure no batch job and no Realtime Service is running at that time.
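Something along these lines on the Job Server host (the PID is just a placeholder, and this is only a rough sketch):

ps -ef | grep al_engine | grep -v grep   # every running dataflow shows up as an al_engine process
kill <pid>                               # try a normal terminate first
kill -9 <pid>                            # last resort, only if the process ignores the signal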

Former Member
0 Kudos

Hi,

Thank you for your answer.

Yes, the al_engine process is still running.

I can kill it, that's right, but my question is: why doesn't this job stop? Do you have any information, or do you know of a bug related to this case?

Many thanks

Former Member
0 Kudos

Are you using bulk load or regular load to load the Sybase IQ table?

What is the last line you see in the trace log? Did the dataflow complete successfully?

If the job is hanging, check whether there are any locks or deadlocks on the target table's database.
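A quick way to look for that on the IQ side is to run the IQ lock procedures from any SQL client. As a rough sketch (dbisql is just an example client; the connection parameters are placeholders for your environment):

dbisql -c "uid=dba;pwd=<password>;eng=<iq_server>;dbn=<iq_db>"
-- then, inside the session:
sp_iqlocks;        -- shows which connections hold locks on which tables
sp_iqconnection;   -- active connections, useful to spot a blocked loader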

Former Member
0 Kudos

Hi Manoj,

We are using bulk load to load this table, and the job is still running:

(12.2) 07-08-10 09:57:53 (1089694:0001) DATAFLOW: Data flow <DF_---> is started.

(12.2) 07-08-10 09:57:53 (1089694:0001) DATAFLOW: Data flow <DF_---> using PAGEABLE Cache with <3578 MB> buffer pool.

Former Member
0 Kudos

Can you check whether something is blocking on the database server side? Check if there are any locks, etc.

Maybe the job is waiting for a response from the database.

Which bulk load option are you using: file or named pipe?

Is the IQ server on a different machine than the Job Server?

Can you enable the trace for the bulk loader? It will print the LOAD TABLE command that it is using; take that command, run it outside DI, and see if it returns control back.
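Just to illustrate the idea, a rough sketch (every name, path and option below is a placeholder; the real statement is whatever the bulk-loader trace prints):

# Paste the LOAD TABLE statement from the trace log into a file, e.g. load_cmd.sql:
#   LOAD TABLE my_target_table ( col1, col2, col3 )
#   FROM '/path/printed/in/the/trace.dat'
#   ESCAPES OFF QUOTES OFF NOTIFY 100000;
# then run it outside DI against the same IQ server:
dbisql -c "uid=dba;pwd=<password>;eng=<iq_server>;dbn=<iq_db>" load_cmd.sql
# If this also never returns, the hang is on the IQ side rather than in DI.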

Former Member
0 Kudos

Hi,

Thank you for your answer.

My job contains 3 dataflows: a delete, a create, and the last one is an update (of a Sybase IQ table).

The problem is the update: the dataflow starts and then nothing. It doesn't work; the job is still shown as running on the web console, the process is still running on Unix, but nothing happens.

In your previous message, when you mentioned the database, are you talking about the BODS repository database or the Sybase IQ database that I want to update?

If the BODS database, any table in particular?

Many thanks

Romuald

Former Member
0 Kudos

Are the dataflows running in parallel or sequentially (one after the other)?

Check for locks on the target datastore database (Sybase IQ).

What happens if you run only this dataflow? Does the job hang in that case as well?

Are you familiar with debugging tools like dbx or gdb on Unix? If yes, try attaching to the al_engine dataflow process that is hanging and post the stack trace; it may give some hints about where it is hanging, or you can file a case with support.
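For example, something like this with gdb (the PID is whatever ps shows for the hung al_engine; on AIX/Solaris the dbx commands differ, but 'where' gives the same kind of trace):

gdb -p <al_engine_pid> -batch -ex "thread apply all bt" > al_engine_stack.txt
# attaches, dumps every thread's stack to a file, then exits and detaches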

Former Member
0 Kudos

Hi Manoj,

Dataflows are running sequentially and there is nothing locked on the Sybase IQ DB.

We tried running the dataflows one by one and still have the same problem.

We tried splitting the last dataflow (a huge update) and it works now.

So we have a workaround but not a solution.

The last dataflow reads millions of rows, does a join and then an update. It looks like it is difficult for BODI to read and join millions of rows. I tried running the query ad hoc and there was no problem ...

Do you know if BODI has a limit, or if there is a parameter about the number of rows or something like that?

Thank you for your help

Romuald

Former Member
0 Kudos

I don't think there is a limit on the number of rows DI can process. Since you mentioned you are joining with another table and there are millions of records, one possibility is that the join is being processed in memory: DI reads all the million records into memory, does the join, and then the update.

Check the monitor log from the Management Console or in the Designer: do you see a transform named MemoryCache ...?

How many rows do you see for each transform?

0 Kudos

1. Updates will be slow. Also check: is any table being used both as a source/lookup and as a target?

2. Check whether the target table is indexed properly.

3. The trace states that it uses something like 3.5 GB of pageable cache. What sort of transformations are you using to update the records? Are there any joins? Are the tables from the same datastore? Is push-down being implemented properly?

4. As you said, you are dealing with heavy updates. Could it be that the entire network bandwidth is taken up, or the log files are filled up, and because of this the communication between the Job Server and the local repository DB is being affected, so your job's updates are not being tracked properly?

former_member467822
Active Participant
0 Kudos

The limit isn't with BODI specifically - unless it's trying to pull all of the data into memory and perform some kind of lookup or other complicated function on the data before performing the update. The problem is probably related to the performance of the query it is generating.

Try using the view-generated-SQL option in the Validation menu and see what the query actually looks like. You'll want to compare this to the query you have already written.

It can take a few tries to get the SQL query generator to produce the most efficient code.

If you've already written the SQL query outside of BODI, you could always swap out your first query with a SQL transform - this would guarantee that the SQL is exactly as you expect. There are a number of reasons this is usually not recommended, but sometimes it really is the most efficient and, dare I say, easiest option.