First error: Query of LOB fields caused heap usage to exceed limit
I am trying to execute a batch job that runs a SOQL query inside its `execute()` method, with the following setup:

- the `execute()` method contains the line `List<SObject> results = (List<SObject>) Database.query(query);`
- the `start()` method queries 10,000 records (`SELECT Id FROM statustracking__c LIMIT :recordsLimit`) [** see the note below]
- the batch size is 2000
- the object has no long text area fields, attachments, or other large-content fields

Running this batch results in the error below:

"Query of LOB fields caused heap usage to exceed limit."

Can someone explain/suggest:

- why is the error thrown even though the record limit is 10,000?
- are there any alternative ways to process 100,000 records other than PK chunking? (that is the volume I actually need to process)

The `query` variable in the first point contains the following SOQL:
SELECT ownervalue__c,
managerlevel3__c,
isdeleted,
ownerid__c,
previous_owner__c,
enddate__c,
isteammanager__c,
iscreatorloggedin__c,
id,
networkresponsevalue__c,
createddate,
createddatelist__c,
managerlevel1__c,
statusduration__c,
status__c,
fieldschanged__c,
ownername__c,
startdate__c,
assignedbyfillinbox__c,
createdbyid,
statusvalid__c,
lastmodifiedbyid,
previous_owner_id__c,
recordtypeid,
agent__c,
duration__c,
lastmodifieddate,
ischannelmanager__c,
businesstime__c,
durationmins__c,
previous_owner_name__c,
day_of_the_week__c,
networkcommentsvalue__c,
systemmodstamp,
durationhours__c,
managerlevel2__c,
creatorrole__c,
ignore__c,
fax__c,
network__c,
businesshours__c,
business_hours_status__c,
case__c,
name,
escalationtier__c,
business_hour_duration__c,
ccmanager__c,
subsequentstatusvalue__c,
start_date_hour__c,
durationseconds__c,
statusvalue__c,
case_contact_id__c,
creatorprofile__c
FROM statustracking__c
WHERE id NOT IN (SELECT statustracking__c
FROM archivelog__c
WHERE sobjecttype__c = 'StatusTracking__c'
AND ( status__c = 'Archived'
OR status__c = 'Failed To Archive'
OR status__c = 'Failed To Delete Original' ))
AND createddate < :olderThan
ORDER BY createddate ASC
NOTE: ** `start()` now queries all the records with a limit of 10,000, and the filtering has been moved to the `execute()` method; the filter was previously in `start()`, where it threw "First error: [REQUEST_RUNNING_TOO_LONG] Your request was running for too long, and has been stopped."
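For context, the setup described above can be sketched roughly as follows. This is a reconstruction pieced together from the bullet points, with hypothetical class and variable names, not the actual code:

```apex
// Rough reconstruction of the described setup (hypothetical names).
public class StatusTrackingBatch implements Database.Batchable<SObject> {
    Integer recordsLimit = 10000;
    Datetime olderThan = Datetime.now().addDays(-365); // assumed cutoff
    // Holds the large SELECT shown above (field list elided here).
    String query = 'SELECT ownervalue__c, /* ... */ creatorprofile__c '
                 + 'FROM statustracking__c '
                 + 'WHERE createddate < :olderThan ORDER BY createddate ASC';

    public Database.QueryLocator start(Database.BatchableContext bc) {
        // Only Ids, capped at 10,000 records.
        return Database.getQueryLocator(
            [SELECT Id FROM StatusTracking__c LIMIT :recordsLimit]);
    }

    public void execute(Database.BatchableContext bc, List<SObject> scope) {
        // Re-runs the full query on every chunk; the scope parameter is
        // never referenced, so every chunk queries the same broad record set.
        List<StatusTracking__c> results =
            (List<StatusTracking__c>) Database.query(query);
        // ... filtering moved here from start(), then processing ...
    }

    public void finish(Database.BatchableContext bc) {}
}
```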
apex soql batch limits heap
asked Nov 19 at 0:04
Hasantha
1 Answer
The query you are executing in your `execute()` method does not appear to contain any reference to, or filter based upon, the `scope` parameter received by `execute()`.

What this means is that, in terms of executing this query, what you do in your `start()` method and what you set your batch size to are both irrelevant. You're just running this query in an asynchronous context apropos of nothing, and you're hitting a limit - either the heap limit or the query processing time limit.

Absent further details on your data model and your code, the query looks to me like a major performance problem. While `CreatedDate` is indexed, your `NOT IN` subquery looks like it is probably very inefficient, particularly if - but not only if - `ArchiveLog__c.SObjectType__c` and `Status__c` aren't indexed and the data volume in that object is large. And you are pulling back a very large number of fields on potentially a very large number of records.
I would recommend you do one of two things:
- Construct a more idiomatic batch class, where you execute a single query in your `start()` method and process each batch in `execute()` without trying to perform additional broad-based queries. If possible given the full details of your data model and objective, tune your query to make this possible by using indexed fields, avoiding negative criteria, and following other best practices for large data volumes. The Query Plan tool may help you achieve this.
- If you are not able to do so, ensure that the query you perform in your `execute()` method is filtered by the `Id`s of the records in the scope for this batch. This will allow you to dramatically limit the bounds of each `execute()` query, both in execution time, via an `Id IN :idSet` type filter, and in heap space consumed. Note that pursuing this route means that your batch class will ultimately have to traverse all of the records of `StatusTracking__c` in your database, which may make the overall performance of the process unacceptable.
- In either route, tune your batch size so that each invocation of `execute()` can realistically complete within a single transaction. 2000 is a very large batch size. If you encounter limits issues within a single batch, turn the batch size down until batches are able to complete successfully.
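The first recommendation, combined with the scope-based `Id IN :idSet` filter from the second, can be sketched as follows. The class name, field list, and archiving logic here are hypothetical; this is a sketch of the pattern, not a drop-in implementation:

```apex
// Sketch of a more idiomatic batch (hypothetical names; adapt to your org).
// start() returns a QueryLocator, so the platform streams records in chunks
// of the requested batch size; execute() touches only the scope it receives.
public class StatusTrackingArchiveBatch implements Database.Batchable<SObject> {
    private final Datetime olderThan;

    public StatusTrackingArchiveBatch(Datetime olderThan) {
        this.olderThan = olderThan;
    }

    public Database.QueryLocator start(Database.BatchableContext bc) {
        // One selective query drives the whole job; no 10,000-row LIMIT.
        return Database.getQueryLocator([
            SELECT Id, Status__c, CreatedDate
            FROM StatusTracking__c
            WHERE CreatedDate < :olderThan
            ORDER BY CreatedDate ASC
        ]);
    }

    public void execute(Database.BatchableContext bc,
                        List<StatusTracking__c> scope) {
        // If a further query is unavoidable, bound it by this chunk's Ids,
        // which keeps both execution time and heap consumption small.
        Set<Id> idSet = new Map<Id, StatusTracking__c>(scope).keySet();
        Set<Id> alreadyArchived = new Set<Id>();
        for (ArchiveLog__c log : [
                SELECT StatusTracking__c
                FROM ArchiveLog__c
                WHERE StatusTracking__c IN :idSet
                  AND SObjectType__c = 'StatusTracking__c'
                  AND Status__c IN ('Archived', 'Failed To Archive',
                                    'Failed To Delete Original')]) {
            alreadyArchived.add(log.StatusTracking__c);
        }
        for (StatusTracking__c st : scope) {
            if (!alreadyArchived.contains(st.Id)) {
                // ... archive this record ...
            }
        }
    }

    public void finish(Database.BatchableContext bc) {}
}
```

Per the third recommendation, start it with a modest batch size and tune downward if you still hit limits, e.g. `Database.executeBatch(new StatusTrackingArchiveBatch(olderThan), 200);`.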
PK chunking is a technique you can use to execute very large scale queries with the Bulk API, not via Batch Apex.
edited Nov 19 at 0:44
answered Nov 19 at 0:24
David Reed