Search recursively for files whose first line contains a specific combination of strings

I need to find all files that contain both of the strings "StockID" and "SellPrice" on their first line.



Here are some example files:



1.csv:



StockID Dept    Cat2    Cat4    Cat5    Cat6    Cat1    Cat3    Title   Notes   Active  Weight  Sizestr Colorstr    Quantity    Newprice    StockCode   DateAdded   SellPrice   PhotoQuant  PhotoStatus Description stockcontrl Agerestricted
<blank> 1 0 0 0 0 22 0 RAF Air Crew Oxygen Connector 50801 1 150 <blank> <blank> 0 0 50866 2018-09-11 05:54:03 65 5 1 <br />rnA wartime RAF aircrew oxygen hose connector.<br />rn<br />rnAir Ministry stamped with Ref. No. 6D/482, Mk IVA.<br />rn<br />rnBrass spring loaded top bayonet fitting for the 'walk around' oxygen bottle extension hose (see last photo).<br />rn<br />rnIn a good condition. 2 0
<blank> 1 0 0 0 0 15 0 WW2 US Airforce Type Handheld Microphone 50619 1 300 <blank> <blank> 1 0 50691 2017-12-06 09:02:11 20 9 1 <br />rnWW2 US Airforce Handheld Microphone type NAF 213264-6 and sprung mounting Bracket No. 213264-2.<br />rn<br />rnType RS 38-A.<br />rn<br />rnMade by Telephonics Corp.<br />rn<br />rnIn a un-issued condition. 3 0
<blank> 1 0 0 0 0 22 0 RAF Seat Type Parachute Harness <blank> 1 4500 <blank> <blank> 1 0 50367 2016-11-04 12:02:26 155 8 1 <br />rnPost War RAF Pilot Seat Type Parachute Harness.<br />rn<br />rnThis Irvin manufactured harness is 'new old' stock and is unissued.<br />rn<br />rnThe label states Irvin Harness type C, Mk10, date 1976.<br />rnIt has Irvin marked buckles and complete harness straps all in 'mint' condition.<br />rn<br />rnFully working Irvin Quick Release Box and a canopy release Irvin 'D-Ring' Handle.<br />rn<br />rnThis harness is the same style type as the WW2 pattern seat type, and with some work could be made to look like one.<br />rn<br />rnIdeal for the re-enactor or collector (Not sold for parachuting).<br />rn<br />rnTotal weight of 4500 gms. 3 0


2.csv:



id  user_id organization_id hash    name    email   date    first_name  hear_about
1 2 15 <blank> Fairley teisjdaijdsaidja@domain.com 1129889679 John 0


I only want to find the files whose first line contains both "StockID" and "SellPrice";
so in this example, the output should be only ./1.csv.



I managed to get this far, but now I'm stuck:



where=$(find ./backup -type f)
for x in $where; do
    head -1 "$x" | grep -w "StockID"
done
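For reference, one way to finish a loop along these lines is sketched below — a bash sketch that prints only the file names and requires both words; the `demo/backup` tree and file names here are hypothetical, made up just for the dry run:

```shell
# Build a throwaway sample tree (hypothetical paths), then search it.
mkdir -p demo/backup/sub
printf 'StockID Dept SellPrice Title\ndata row\n' > demo/backup/1.csv
printf 'id user_id hash\n1 2 15\n'                > demo/backup/sub/2.csv

# Print the path of every regular file whose first line contains
# both whole words; -print0 / read -d '' keeps odd file names intact.
find demo/backup -type f -print0 |
while IFS= read -r -d '' f; do
    if head -n 1 "$f" | grep -qw 'StockID' &&
       head -n 1 "$f" | grep -qw 'SellPrice'; then
        printf '%s\n' "$f"
    fi
done
```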

Tags: linux, awk, grep, find, head
asked Dec 9 at 15:16 by Jonson; edited Dec 9 at 17:18 by RomanPerekhrest
          4 Answers














          find + awk solution:



find ./backup -type f -exec \
    awk 'NR == 1{ if (/StockID.*SellPrice/) print FILENAME; exit }' {} \;


If the two words may appear in either order, replace the pattern /StockID.*SellPrice/ with /StockID/ && /SellPrice/.





With a huge number of files, a more efficient alternative processes a batch of files per invocation, so the total number of awk invocations is much smaller than the number of files:



find ./backup -type f -exec \
    awk 'FNR == 1 && /StockID.*SellPrice/{ print FILENAME }{ nextfile }' {} +
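A quick sanity check of the batched form against a throwaway tree (the file names below are hypothetical; `nextfile` needs GNU awk, mawk, or another awk that implements it):

```shell
# Hypothetical sample files for a dry run.
mkdir -p backup
printf 'StockID Dept SellPrice\nrow 1\n' > backup/1.csv
printf 'id user_id hash\n1 2 15\n'       > backup/2.csv

# Only the file whose first line has both words is reported.
find ./backup -type f -exec \
    awk 'FNR == 1 && /StockID.*SellPrice/{ print FILENAME }{ nextfile }' {} +
```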





answered Dec 9 at 15:51, edited Dec 10 at 5:44 by RomanPerekhrest
• I owe you a coffee! It works. – Jonson, Dec 9 at 15:56

• @user000001, see my update. – RomanPerekhrest, Dec 9 at 18:26

• How about making it more efficient with awk 'FNR == 1 && /StockID.*SellPrice/ {print FILENAME}; {nextfile}' {} +? This will lead to fewer invocations of awk. – iruvar, Dec 9 at 23:29

• @iruvar, good hint. I've made an update. Thanks. – RomanPerekhrest, Dec 10 at 5:45







          With GNU grep or compatible:



          grep -Hrnm1 '^' ./backup | sed -n '/StockID.*SellPrice/s/:1:.*//p'


The recursive grep prints, for each file, a filename:1:line record for its first line without reading through the whole file (the -m1 flag makes it stop at the first match), and the sed prints the filename part of the records whose line part matches the pattern.



This will fail for file names that themselves contain a :1: or newline characters, but that is a risk worth taking compared with a slow find + awk combination that executes another process for each file.
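As a worked example (GNU grep and a hypothetical sample tree assumed):

```shell
# Hypothetical sample files for a dry run.
mkdir -p backup
printf 'StockID Dept SellPrice\nrow\n' > backup/1.csv
printf 'id user_id hash\n1 2 15\n'     > backup/2.csv

# filename:1:line records -> keep only the names whose line part matches.
grep -Hrnm1 '^' ./backup | sed -n '/StockID.*SellPrice/s/:1:.*//p'
```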






answered Dec 10 at 0:58 by Uncle Billy; edited Dec 10 at 7:14 by Stéphane Chazelas














            To avoid running one command per file and reading the entire files, with GNU awk:



(unset -v POSIXLY_CORRECT; exec find backup/ -type f -exec gawk '
    /\<StockID\>/ && /\<SellPrice\>/ {print FILENAME}; {nextfile}' {} +)


            Or with zsh:



set -o rematchpcre # where we know for sure \b is supported
for file (backup/**/*(ND.)) {
  IFS= read -r line < $file &&
    [[ $line =~ "\bStockID\b" ]] &&
    [[ $line =~ "\bSellPrice\b" ]] &&
    print -r $file
}


            Or:



set -o rematchpcre
print -rl backup/**/*(D.e:'
  IFS= read -r line < $REPLY &&
    [[ $line =~ "\bStockID\b" ]] &&
    [[ $line =~ "\bSellPrice\b" ]]':)


Or with bash on systems where the native extended regular expressions support the \<, \> word-boundary operators (on some others, you could also try [[:<:]]/[[:>:]] or \b instead):



RE1='\<StockID\>' RE2='\<SellPrice\>' find backup -type f -exec bash -c '
  for file do
    IFS= read -r line < "$file" &&
      [[ $line =~ $RE1 ]] &&
      [[ $line =~ $RE2 ]] &&
      printf "%s\n" "$file"
  done' bash {} +
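Where support for \< and \> is in doubt, word boundaries spelled out with POSIX character classes also work with bash's =~ — a minimal sketch, with hypothetical sample files:

```shell
# Hypothetical sample files for a dry run.
mkdir -p backup
printf 'StockID Dept SellPrice\nrow\n' > backup/1.csv
printf 'id user_id hash\n1 2 15\n'     > backup/2.csv

# Hand-rolled word boundaries: the word must be preceded and followed
# by start/end of line or a non-word character, so e.g. "MyStockID"
# on a first line would not match.
RE1='(^|[^[:alnum:]_])StockID([^[:alnum:]_]|$)'
RE2='(^|[^[:alnum:]_])SellPrice([^[:alnum:]_]|$)'
for f in backup/*.csv; do
    if IFS= read -r line < "$f" &&
       [[ $line =~ $RE1 ]] && [[ $line =~ $RE2 ]]; then
        printf '%s\n' "$f"
    fi
done
```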






• Stéphane, I'm guessing you're going to have to account for $0, so the done' {} + should read something like done' bash {} +? – iruvar, Dec 11 at 1:36

• @iruvar thanks, fixed now. – Stéphane Chazelas, Dec 11 at 7:48

































egrep + awk:

 egrep -Hrn 'StockID.*SellPrice|SellPrice.*StockID' ./backup | awk -F ':' '$2==1{print $1}'

Note that a plain 'StockID|SellPrice' alternation would also report files whose first line contains only one of the two words; the pattern above requires both, in either order.





            share|improve this answer





















              Your Answer








              StackExchange.ready(function() {
              var channelOptions = {
              tags: "".split(" "),
              id: "106"
              };
              initTagRenderer("".split(" "), "".split(" "), channelOptions);

              StackExchange.using("externalEditor", function() {
              // Have to fire editor after snippets, if snippets enabled
              if (StackExchange.settings.snippets.snippetsEnabled) {
              StackExchange.using("snippets", function() {
              createEditor();
              });
              }
              else {
              createEditor();
              }
              });

              function createEditor() {
              StackExchange.prepareEditor({
              heartbeatType: 'answer',
              autoActivateHeartbeat: false,
              convertImagesToLinks: false,
              noModals: true,
              showLowRepImageUploadWarning: true,
              reputationToPostImages: null,
              bindNavPrevention: true,
              postfix: "",
              imageUploader: {
              brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
              contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
              allowUrls: true
              },
              onDemand: true,
              discardSelector: ".discard-answer"
              ,immediatelyShowMarkdownHelp:true
              });


              }
              });














              draft saved

              draft discarded


















              StackExchange.ready(
              function () {
              StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2funix.stackexchange.com%2fquestions%2f486931%2fsearch-recursive-for-files-that-contains-a-specific-combination-of-strings-on-th%23new-answer', 'question_page');
              }
              );

              Post as a guest















              Required, but never shown

























              4 Answers
              4






              active

              oldest

              votes








              4 Answers
              4






              active

              oldest

              votes









              active

              oldest

              votes






              active

              oldest

              votes









              6














              find + awk solution:



              find ./backup -type f -exec 
              awk 'NR == 1{ if (/StockID.*SellPrice/) print FILENAME; exit }' {} ;


              In case if the order of crucial words may be different - replace pattern /StockID.*SellPrice/ with /StockID/ && /SellPrice/.





              In case of huge number of files a more efficient alternative would be (processing a bunch of files at once; the total number of invocations of the command will be much less than the number of matched files):



              find ./backup -type f -exec 
              awk 'FNR == 1 && /StockID.*SellPrice/{ print FILENAME }{ nextfile }' {} +





              share|improve this answer



















              • 1




                i own you a coffee! it works.
                – Jonson
                Dec 9 at 15:56






              • 1




                @user000001, see my update
                – RomanPerekhrest
                Dec 9 at 18:26










              • how about making it more efficient with awk 'FNR == 1 &&/StockID.*SellPrice/ {print FILENAME}; {nextfile }' {} +. This will lead to fewer invocations of awk
                – iruvar
                Dec 9 at 23:29












              • @iruvar, good hint. I've made an update. Thanks.
                – RomanPerekhrest
                Dec 10 at 5:45
















              6














              find + awk solution:



              find ./backup -type f -exec 
              awk 'NR == 1{ if (/StockID.*SellPrice/) print FILENAME; exit }' {} ;


              In case if the order of crucial words may be different - replace pattern /StockID.*SellPrice/ with /StockID/ && /SellPrice/.





              In case of huge number of files a more efficient alternative would be (processing a bunch of files at once; the total number of invocations of the command will be much less than the number of matched files):



              find ./backup -type f -exec 
              awk 'FNR == 1 && /StockID.*SellPrice/{ print FILENAME }{ nextfile }' {} +





              share|improve this answer



















              • 1




                i own you a coffee! it works.
                – Jonson
                Dec 9 at 15:56






              • 1




                @user000001, see my update
                – RomanPerekhrest
                Dec 9 at 18:26










              • how about making it more efficient with awk 'FNR == 1 &&/StockID.*SellPrice/ {print FILENAME}; {nextfile }' {} +. This will lead to fewer invocations of awk
                – iruvar
                Dec 9 at 23:29












              • @iruvar, good hint. I've made an update. Thanks.
                – RomanPerekhrest
                Dec 10 at 5:45














              6












              6








              6






              find + awk solution:



              find ./backup -type f -exec 
              awk 'NR == 1{ if (/StockID.*SellPrice/) print FILENAME; exit }' {} ;


              In case if the order of crucial words may be different - replace pattern /StockID.*SellPrice/ with /StockID/ && /SellPrice/.





              In case of huge number of files a more efficient alternative would be (processing a bunch of files at once; the total number of invocations of the command will be much less than the number of matched files):



              find ./backup -type f -exec 
              awk 'FNR == 1 && /StockID.*SellPrice/{ print FILENAME }{ nextfile }' {} +





              share|improve this answer














              find + awk solution:



              find ./backup -type f -exec 
              awk 'NR == 1{ if (/StockID.*SellPrice/) print FILENAME; exit }' {} ;


              In case if the order of crucial words may be different - replace pattern /StockID.*SellPrice/ with /StockID/ && /SellPrice/.





              In case of huge number of files a more efficient alternative would be (processing a bunch of files at once; the total number of invocations of the command will be much less than the number of matched files):



              find ./backup -type f -exec 
              awk 'FNR == 1 && /StockID.*SellPrice/{ print FILENAME }{ nextfile }' {} +






              share|improve this answer














              share|improve this answer



              share|improve this answer








              edited Dec 10 at 5:44

























              answered Dec 9 at 15:51









              RomanPerekhrest

              22.8k12346




              22.8k12346








              • 1




                i own you a coffee! it works.
                – Jonson
                Dec 9 at 15:56






              • 1




                @user000001, see my update
                – RomanPerekhrest
                Dec 9 at 18:26










              • how about making it more efficient with awk 'FNR == 1 &&/StockID.*SellPrice/ {print FILENAME}; {nextfile }' {} +. This will lead to fewer invocations of awk
                – iruvar
                Dec 9 at 23:29












              • @iruvar, good hint. I've made an update. Thanks.
                – RomanPerekhrest
                Dec 10 at 5:45














              • 1




                i own you a coffee! it works.
                – Jonson
                Dec 9 at 15:56






              • 1




                @user000001, see my update
                – RomanPerekhrest
                Dec 9 at 18:26










              • how about making it more efficient with awk 'FNR == 1 &&/StockID.*SellPrice/ {print FILENAME}; {nextfile }' {} +. This will lead to fewer invocations of awk
                – iruvar
                Dec 9 at 23:29












              • @iruvar, good hint. I've made an update. Thanks.
                – RomanPerekhrest
                Dec 10 at 5:45








              1




              1




              i own you a coffee! it works.
              – Jonson
              Dec 9 at 15:56




              i own you a coffee! it works.
              – Jonson
              Dec 9 at 15:56




              1




              1




              @user000001, see my update
              – RomanPerekhrest
              Dec 9 at 18:26




              @user000001, see my update
              – RomanPerekhrest
              Dec 9 at 18:26












              how about making it more efficient with awk 'FNR == 1 &&/StockID.*SellPrice/ {print FILENAME}; {nextfile }' {} +. This will lead to fewer invocations of awk
              – iruvar
              Dec 9 at 23:29






              how about making it more efficient with awk 'FNR == 1 &&/StockID.*SellPrice/ {print FILENAME}; {nextfile }' {} +. This will lead to fewer invocations of awk
              – iruvar
              Dec 9 at 23:29














              @iruvar, good hint. I've made an update. Thanks.
              – RomanPerekhrest
              Dec 10 at 5:45




              @iruvar, good hint. I've made an update. Thanks.
              – RomanPerekhrest
              Dec 10 at 5:45













              1














              With GNU grep or compatible:



              grep -Hrnm1 '^' ./backup | sed -n '/StockID.*SellPrice/s/:1:.*//p'


              The recursive grep will print the first line of each file and print a filename:1:line without reading through the whole file (the -m1 flag should make it exit at the 1st match) and the sed will print the filename where the line part matches the pattern.



              This will fail with file names which contain a :1: themselves or newline characters, but this is a risk worth taking instead of putting up some slow find + awk combo which executes another process for each file.






              share|improve this answer




























                1














                With GNU grep or compatible:



                grep -Hrnm1 '^' ./backup | sed -n '/StockID.*SellPrice/s/:1:.*//p'


                The recursive grep will print the first line of each file and print a filename:1:line without reading through the whole file (the -m1 flag should make it exit at the 1st match) and the sed will print the filename where the line part matches the pattern.



                This will fail with file names which contain a :1: themselves or newline characters, but this is a risk worth taking instead of putting up some slow find + awk combo which executes another process for each file.






                share|improve this answer


























                  1












                  1








                  1






                  With GNU grep or compatible:



                  grep -Hrnm1 '^' ./backup | sed -n '/StockID.*SellPrice/s/:1:.*//p'


                  The recursive grep will print the first line of each file and print a filename:1:line without reading through the whole file (the -m1 flag should make it exit at the 1st match) and the sed will print the filename where the line part matches the pattern.



                  This will fail with file names which contain a :1: themselves or newline characters, but this is a risk worth taking instead of putting up some slow find + awk combo which executes another process for each file.






                  share|improve this answer














                  With GNU grep or compatible:



                  grep -Hrnm1 '^' ./backup | sed -n '/StockID.*SellPrice/s/:1:.*//p'


                  The recursive grep will print the first line of each file and print a filename:1:line without reading through the whole file (the -m1 flag should make it exit at the 1st match) and the sed will print the filename where the line part matches the pattern.



                  This will fail with file names which contain a :1: themselves or newline characters, but this is a risk worth taking instead of putting up some slow find + awk combo which executes another process for each file.







                  share|improve this answer














                  share|improve this answer



                  share|improve this answer








                  edited Dec 10 at 7:14









                  Stéphane Chazelas

                  299k54563913




                  299k54563913










                  answered Dec 10 at 0:58









                  Uncle Billy

                  1935




                  1935























                      1














                      To avoid running one command per file and reading the entire files, with GNU awk:



                      (unset -v POSIXLY_CORRECT; exec find backup/ -type f -exec gawk '
                      /<StockID>/ && /<SellPrice>/ {print FILENAME}; {nextfile}' {} +)


                      Or with zsh:



                      set -o rematchpcre # where we know for sure b is supported
                      for file (backup/**/*(ND.)) {
                      IFS= read -r line < $file &&
                      [[ $line =~ "bStockIDb" ]] &&
                      [[ $line =~ "bSellPriceb" ]] &&
                      print -r $file
                      }


                      Or:



                      set -o rematchpcre
                      print -rl backup/**/*(D.e:'
                      IFS= read -r line < $REPLY &&
                      [[ $line =~ "bStockIDb" ]] &&
                      [[ $line =~ "bSellPriceb" ]]':)


                      Or with bash on systems where native extended regular expressions support <, > word boundary operators (on others, you some others, you could also try [[:<:]]/[[:>:]] or b instead):



                      RE1='<StockId>' RE2='<SellPrice>' find backup -type f -exec bash -c '
                      for file do
                      IFS= read -r line < "$file" &&
                      [[ $line =~ $RE1 ]] &&
                      [[ $line =~ $RE2 ]] &&
                      printf "%sn" "$file"
                      done' bash {} +





                      share|improve this answer























                      • Stéphane , I'm guessing you're going to have to account for $0, so the done' {} + should read something like done' bash {} + ?
                        – iruvar
                        Dec 11 at 1:36










                      • @iruvar thanks. fixed now.
                        – Stéphane Chazelas
                        Dec 11 at 7:48
















                      1














                      To avoid running one command per file and reading the entire files, with GNU awk:



                      (unset -v POSIXLY_CORRECT; exec find backup/ -type f -exec gawk '
                      /<StockID>/ && /<SellPrice>/ {print FILENAME}; {nextfile}' {} +)


                      Or with zsh:



                      set -o rematchpcre # where we know for sure b is supported
                      for file (backup/**/*(ND.)) {
                      IFS= read -r line < $file &&
                      [[ $line =~ "bStockIDb" ]] &&
                      [[ $line =~ "bSellPriceb" ]] &&
                      print -r $file
                      }


                      Or:



                      set -o rematchpcre
                      print -rl backup/**/*(D.e:'
                      IFS= read -r line < $REPLY &&
                      [[ $line =~ "bStockIDb" ]] &&
                      [[ $line =~ "bSellPriceb" ]]':)


                      Or with bash on systems where native extended regular expressions support <, > word boundary operators (on others, you some others, you could also try [[:<:]]/[[:>:]] or b instead):



                      RE1='<StockId>' RE2='<SellPrice>' find backup -type f -exec bash -c '
                      for file do
                      IFS= read -r line < "$file" &&
                      [[ $line =~ $RE1 ]] &&
                      [[ $line =~ $RE2 ]] &&
                      printf "%sn" "$file"
                      done' bash {} +





                      share|improve this answer























                      • Stéphane , I'm guessing you're going to have to account for $0, so the done' {} + should read something like done' bash {} + ?
                        – iruvar
                        Dec 11 at 1:36










                      • @iruvar thanks. fixed now.
                        – Stéphane Chazelas
                        Dec 11 at 7:48














                      edited Dec 11 at 7:47
                      answered Dec 10 at 7:08
                      Stéphane Chazelas


























                      egrep + awk:

                       egrep -Hrn 'StockID.*SellPrice|SellPrice.*StockID' ./backup | awk -F ':' '$2==1{print $1}'
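The grep-to-awk pipeline can be tried on a throwaway tree. Note that to require *both* strings on the first line (as the question asks), the two alternatives have to be combined into one pattern such as `StockID.*SellPrice|SellPrice.*StockID`; with two separate alternatives a file matching only one of them would also be printed. A sketch (directory and file names made up; `-r` needs GNU or BSD grep):

```shell
mkdir -p demo_backup
printf 'StockID Dept SellPrice\nother line\n' > demo_backup/match.csv
printf 'no header\nStockID SellPrice\n'       > demo_backup/late.csv

# -n prefixes each hit with its line number; awk keeps only line-1 hits.
found=$(grep -Ern 'StockID.*SellPrice|SellPrice.*StockID' demo_backup |
        awk -F ':' '$2 == 1 {print $1}')
printf '%s\n' "$found"

rm -r demo_backup
```

Beware also that splitting on `:` misparses file names or first lines that themselves contain colons.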





                          answered Dec 9 at 21:29
                          msp9011





























