on 2016 Apr 23 6:21 AM
One of our cron jobs that does batch processing of orders performs poorly because of the high number of database calls it makes. Since the job has to handle a huge number of items, lazy loading means it ends up making a DB call even to fetch a single attribute, which is proving detrimental. Can someone suggest a preferred approach to avoid so many DB hits? If I cache the items upfront, will that help? Will it store all attribute values in the cache as well?
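(For illustration only: the following is a hypothetical plain-Java sketch, not hybris code, using a made-up FakeRepo class to show why per-attribute lazy loads multiply DB calls compared with one batched fetch.)

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class NPlusOneDemo
{
	// Hypothetical repository that counts how many "database" round trips it receives.
	static class FakeRepo
	{
		int calls = 0;
		final Map<String, Double> prices = Map.of("A", 1.0, "B", 2.0, "C", 3.0);

		// Lazy style: one round trip per item.
		Double priceOf(final String code)
		{
			calls++;
			return prices.get(code);
		}

		// Batched style: one round trip for the whole chunk.
		Map<String, Double> pricesOf(final List<String> codes)
		{
			calls++;
			return codes.stream().collect(Collectors.toMap(c -> c, prices::get));
		}
	}

	public static void main(final String[] args)
	{
		final List<String> codes = List.of("A", "B", "C");

		final FakeRepo lazy = new FakeRepo();
		for (final String c : codes)
		{
			lazy.priceOf(c); // N round trips for N items
		}
		System.out.println("lazy calls = " + lazy.calls); // 3

		final FakeRepo batched = new FakeRepo();
		batched.pricesOf(codes); // 1 round trip for all items
		System.out.println("batched calls = " + batched.calls); // 1
	}
}
```

With thousands of items and several lazily loaded attributes each, the per-item style multiplies round trips, which is exactly the n+1 pattern the answers below address.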
Why not load the necessary data into a DTO directly?
@Override
public List<ProductPriceModelData> fetchProductPriceModelDatas(List<String> productCodes, CatalogVersionModel catalogVersion)
{
	List<List<String>> productCodeChunks = ListUtils.partition(productCodes, SQL_SAP_HANA_IN_CLAUSE_ELEMENT_LIMIT);
	List<ProductPriceModelData> datas = new ArrayList<>();
	// split the database query into smaller ones with a limited IN-clause size to avoid an SAP HANA SQL error
	for (List<String> productCodeChunk : productCodeChunks)
	{
		StringBuilder sb = new StringBuilder();
		sb.append("SELECT");
		sb.append(" {p." + ProductModel.PK + "},");
		sb.append(" {p." + ProductModel.CODE + "},");
		sb.append(" {p." + ProductModel.ORIGINALPRICE + "},");
		sb.append(" {pr." + PriceRowModel.PK + "},");
		sb.append(" {pr." + PriceRowModel.PRICE + "}");
		sb.append(" FROM");
		sb.append(" {");
		sb.append(" " + ProductModel._TYPECODE + " AS p");
		sb.append(" LEFT JOIN " + PriceRowModel._TYPECODE + " AS pr ON {pr." + PriceRowModel.PRODUCT + "} = {p." + ProductModel.PK + "}");
		sb.append(" }");
		sb.append(" WHERE {p." + ProductModel.CATALOGVERSION + "} = ?catalogVersion");
		sb.append(" AND {p." + ProductModel.CODE + "} IN (?productCodes)");
		Map<String, Object> parameters = new HashMap<>();
		parameters.put("catalogVersion", catalogVersion);
		parameters.put("productCodes", productCodeChunk);
		final FlexibleSearchQuery query = new FlexibleSearchQuery(sb.toString(), parameters);
		query.setResultClassList(Arrays.asList(Long.class, String.class, Double.class, Long.class, Double.class));
		final SearchResult<List<Object>> result = this.getFlexibleSearchService().search(query);
		datas.addAll(result.getResult().stream().map(this::mapToData).collect(Collectors.toList()));
	}
	return datas;
}

private ProductPriceModelData mapToData(final List<Object> values)
{
	ProductPriceModelData data = new ProductPriceModelData();
	data.setProductPk((Long) values.get(0));
	data.setProductCode((String) values.get(1));
	data.setOriginalPrice((Double) values.get(2));
	data.setPriceRowPk((Long) values.get(3));
	data.setPriceRowValue((Double) values.get(4));
	return data;
}
The query results are mapped to the DTO using mapToData. The query string is long, but you get everything you need in a single query per chunk, without the infamous n+1 problem.
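The ProductPriceModelData DTO itself is not shown above. A minimal sketch consistent with the setters used in mapToData might look like this (field names are inferred from the setters; the real class may differ):

```java
// Hypothetical sketch of the DTO used by mapToData: a plain value holder
// carrying exactly the columns the FlexibleSearch query selects.
public class ProductPriceModelData
{
	private Long productPk;
	private String productCode;
	private Double originalPrice;
	private Long priceRowPk;
	private Double priceRowValue;

	public Long getProductPk() { return productPk; }
	public void setProductPk(final Long productPk) { this.productPk = productPk; }

	public String getProductCode() { return productCode; }
	public void setProductCode(final String productCode) { this.productCode = productCode; }

	public Double getOriginalPrice() { return originalPrice; }
	public void setOriginalPrice(final Double originalPrice) { this.originalPrice = originalPrice; }

	public Long getPriceRowPk() { return priceRowPk; }
	public void setPriceRowPk(final Long priceRowPk) { this.priceRowPk = priceRowPk; }

	public Double getPriceRowValue() { return priceRowValue; }
	public void setPriceRowValue(final Double priceRowValue) { this.priceRowValue = priceRowValue; }
}
```

Because the DTO holds plain values rather than attached hybris models, iterating over it in the cron job triggers no further lazy loads.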
Caching is probably your best option if you're just modifying items already stored in the database. Is the processing on a specific type? You could always look at adding a new cache region.
Have you also tried querying for exactly what you need so you're not making unnecessary calls? Have you had a DBA look at the processing to see if there are any improvements that can be made to the data model to ensure faster processing?
If you modify an item that is in the cache, the cached entry is invalidated and the DB is updated; the write is not asynchronous. The cache will only help on the read side, if you're retrieving the same items again and again. If you're doing a write-intensive operation, you'll need to ensure your DB connection is fast.