This part is important. How many rows does the table have? How many of them have the value 'PUR' in the RES_VALUE column? Are the PUR_DATE and PUR columns VARCHAR2 or DATE? When you run the query above (with SELECT *, no GROUP BY), how many rows are returned? Are your table statistics up to date?

PS: AndrewSayer, I wrote this post without seeing yours; you already asked everything I asked.

Share the execution plan and the actual selectivities if that doesn't solve your problem.

-edit

It seems you edited your post after I'd seen it, and the column really is a VARCHAR2. Were you not aware of this? Are you aware of how many problems this causes you? Can you fix the table so that you store the data in the correct types, constrain the data properly, and give the cost-based optimizer the minimum amount of information it needs? (Statistics would also be useful.)

The fact that PUR_DATE is a varchar2 makes this awkward. Predicates such as

AND PUR_DATE >= TRUNC (SYSDATE - 1)

may be disastrous, because they rely on implicit type conversion. Don't even think about it.
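If you cannot change the column type right away, at least make the conversion explicit on the column side. A sketch, assuming the strings really are stored as DD/MM/YYYY (verify this) and using a hypothetical table name:

```sql
-- Explicit conversion: the comparison is now done between DATEs, but
-- applying TO_DATE to the column defeats any plain index on PUR_DATE,
-- and it raises an error if any row holds a string that is not a
-- valid DD/MM/YYYY date.
SELECT *
  FROM your_table   -- hypothetical name
 WHERE res_value = 'PUR'
   AND TO_DATE(pur_date, 'DD/MM/YYYY') >= TRUNC(SYSDATE - 1);
```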

You do need to say how many rows you expect from the combined predicate and how many from each predicate individually. Then you could consider creating a virtual column defined as TO_DATE(PUR_DATE, 'DD/MM/YYYY'), creating a composite index on RES_VALUE plus the virtual column (or perhaps the other way around), and adjusting your query to filter on the virtual column.
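A sketch of that virtual-column approach (11g or later; the table name is a placeholder and the format mask must match what is actually stored):

```sql
-- Add a virtual DATE column derived from the VARCHAR2 string.
ALTER TABLE your_table ADD (
  pur_date_dt DATE
    GENERATED ALWAYS AS (TO_DATE(pur_date, 'DD/MM/YYYY')) VIRTUAL
);

-- Composite index; leading with RES_VALUE suits an equality predicate
-- followed by a range predicate on the date.
CREATE INDEX ix_res_purdate ON your_table (res_value, pur_date_dt);

-- Rewrite the query to filter on the virtual column.
SELECT *
  FROM your_table
 WHERE res_value = 'PUR'
   AND pur_date_dt >= TRUNC(SYSDATE - 1);
```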

Note that you may have a problem with statistics. It would seem likely that analyzing once a week means that your stats know nothing of rows entered in the last day.
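If stale statistics turn out to be part of the problem, gathering them more often than weekly is straightforward; a sketch, with schema and table names as placeholders:

```sql
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname => 'YOUR_SCHEMA',   -- placeholder
    tabname => 'YOUR_TABLE',    -- placeholder
    cascade => TRUE);           -- also gather index statistics
END;
/
```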

Why do you use TRUNC around your date columns? Using TRUNC around SYSDATE seems enough.

I couldn't understand it

Are your table statistics up to date?

Every week we gather stats.

Any suggestions?

Thanks,

As you've been told, the design of this table is seriously flawed: it uses a VARCHAR2 datatype rather than DATE. The query relies on implicit date conversion, which I wouldn't rely on for production data. I blogged on such design mistakes here:

Additionally, what you're doing causes problems for the optimizer, since Oracle can't know these values are dates. Looking at an example that stores the date in a VARCHAR2 column, it produces incorrect results:

SQL> create table datetst(
  2    myid number,
  3    mydt varchar2(20));

Table created.

SQL> begin
  2    for i in 1..100 loop
  3      insert into datetst
  4      values(i, to_char(sysdate+i, 'MM/DD/RRRR'));
  5    end loop;
  6
  7    commit;
  8  end;
  9  /

PL/SQL procedure successfully completed.

SQL> select *
  2  from datetst
  3  where mydt <= sysdate +10;
where mydt <= sysdate +10
      *
ERROR at line 3:
ORA-01843: not a valid month

SQL> select *
  2  from datetst
  3  where mydt <= to_char(sysdate +10, 'MM/DD/RRRR');

      MYID MYDT
---------- --------------------
         1 11/16/2016
         2 11/17/2016
         3 11/18/2016
         4 11/19/2016
         5 11/20/2016
         6 11/21/2016
         7 11/22/2016
         8 11/23/2016
         9 11/24/2016
        10 11/25/2016
        47 01/01/2017
        48 01/02/2017
        49 01/03/2017
        50 01/04/2017
        51 01/05/2017
        52 01/06/2017
        53 01/07/2017
        54 01/08/2017
        55 01/09/2017
        56 01/10/2017
        57 01/11/2017
        58 01/12/2017
        59 01/13/2017
        60 01/14/2017
        61 01/15/2017
        62 01/16/2017
        63 01/17/2017
        64 01/18/2017
        65 01/19/2017
        66 01/20/2017
        67 01/21/2017
        68 01/22/2017
        69 01/23/2017
        70 01/24/2017
        71 01/25/2017
        72 01/26/2017
        73 01/27/2017
        74 01/28/2017
        75 01/29/2017
        76 01/30/2017
        77 01/31/2017
        78 02/01/2017
        79 02/02/2017
        80 02/03/2017
        81 02/04/2017
        82 02/05/2017
        83 02/06/2017
        84 02/07/2017
        85 02/08/2017
        86 02/09/2017
        87 02/10/2017
        88 02/11/2017
        89 02/12/2017
        90 02/13/2017
        91 02/14/2017
        92 02/15/2017
        93 02/16/2017
        94 02/17/2017
        95 02/18/2017
        96 02/19/2017
        97 02/20/2017
        98 02/21/2017
        99 02/22/2017
       100 02/23/2017

64 rows selected.

The first error occurs because of the default NLS date format; an explicit conversion was needed to get any results from the query at all. Notice also that this query returned 64 rows when the expected output should have been no more than 10 rows (given the data in the table).
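The inflated result comes from plain character-by-character comparison: '01/01/2017' sorts before '11/25/2016' because '0' < '1', even though the date it represents is later. You can see this directly from DUAL:

```sql
-- Character comparison, not date comparison:
-- the first character decides, so '0...' sorts lower than '1...'.
SELECT CASE
         WHEN '01/01/2017' <= '11/25/2016' THEN 'string sorts lower'
         ELSE 'string sorts higher'
       END AS cmp
  FROM dual;
-- Returns: string sorts lower
```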

Fixing that (and, yes, I know you may not be able to simply change the table definition) produces a different result:

SQL> drop table datetst purge;

Table dropped.

SQL> create table datetst(
  2    myid number,
  3    mydt date);

Table created.

SQL> begin
  2    for i in 1..100 loop
  3      insert into datetst
  4      values(i, sysdate+i);
  5    end loop;
  6
  7    commit;
  8  end;
  9  /

PL/SQL procedure successfully completed.

SQL> select *
  2  from datetst
  3  where mydt <= sysdate +10;

      MYID MYDT
---------- ---------
         1 16-NOV-16
         2 17-NOV-16
         3 18-NOV-16
         4 19-NOV-16
         5 20-NOV-16
         6 21-NOV-16
         7 22-NOV-16
         8 23-NOV-16
         9 24-NOV-16
        10 25-NOV-16

10 rows selected.

Now the 10 correct rows are returned, and no 'fiddling' with the query was needed to get things to process.

Part of your problem, I believe, goes back to the first part of this example, where Oracle returned 64 rows when only 10 should have met the criteria. I expect the poor decision to make the PUR_DATE column a VARCHAR2 is causing far more records to be returned, records that do NOT actually meet the criteria, inflating the result set needlessly and causing more work for Oracle.