
Improve the efficiency of my PowerShell script

By : Vasil Silianov
Date : November 22 2020, 07:01 PM
The following should speed up your task substantially.
If the intent is truly to look for the search words in the file names:
code :
$searchWords = (Get-Content 'C:\temp\list.txt') -split ','
$path = 'C:\Users\david.craven\Dropbox\Facebook Asset Tagging\_SJC Warehouse_\_Project Completed_\2018\A*'

Get-ChildItem -File -Path $path -Recurse -PipelineVariable file |
  Select-Object -ExpandProperty Name |
    Select-String -List -SimpleMatch -Pattern $searchWords |
      Select-Object @{n='SearchWord'; e={$_.Pattern}},
                    @{n='FoundFile'; e={$file.FullName}} |
        Export-Csv C:\temp\output.csv -NoTypeInformation
If the intent is instead to look for the search words in the contents of the files:
$searchWords = (Get-Content 'C:\temp\list.txt') -split ','
$path = 'C:\Users\david.craven\Dropbox\Facebook Asset Tagging\_SJC Warehouse_\_Project Completed_\2018\A*'

Get-ChildItem -File -Path $path -Recurse |
  Select-String -SimpleMatch -Pattern $searchWords |
    Select-Object @{n='SearchWord'; e={$_.Pattern}},
                  @{n='FoundFile'; e={$_.Path}} |
      Export-Csv C:\temp\output.csv -NoTypeInformation
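One caveat: -split ',' keeps any whitespace around the words, and -SimpleMatch would then fail to match them. If list.txt might contain spaces after the commas (an assumption about the data), trimming is cheap insurance; a minimal sketch:
code :
# Hypothetical hardening: trim each comma-separated word and drop empty entries
$searchWords = (Get-Content 'C:\temp\list.txt') -split ',' | ForEach-Object { $_.Trim() } | Where-Object { $_ }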


PowerShell Script Efficiency


By : user3281451
Date : March 29 2020, 07:55 AM
I use PowerShell as much as possible for quick and easy scripting tasks; a lot of times during my job I will use it for data parsing, log-file sifting, or for creating CSV/text files. You're killing your performance right here:
code :
$retValue += ("{0}{1}" -f $tmp.Substring(0, $tmp.Length - $i.ToString().Length), $i)
.NET strings are immutable, so every += allocates a new string and copies everything accumulated so far, which gets quadratically slower as $retValue grows. Let the loop stream its results and capture them in a single assignment instead:
$start = Get-Date
$retValue =
for ($left = 97; $left -lt 123; $left++)
{ 
    for ($middle = 97; $middle -lt 123; $middle++)
    { 
        for ($right = 97; $right -lt 123; $right++)
        { 
            for ($i = 1; $i -lt 1000; $i++)
            { 
                $tmp = ("{0}{1}{2}000" -f [char]$left, [char]$middle, [char]$right)
                "{0}{1}" -f $tmp.Substring(0, $tmp.Length - $i.ToString().Length), $i
            }
        }
    }
}
Write-Host ("Time: {0} minutes" -f ((get-date)-$start).TotalMinutes)
Time: 1.866812045 minutes
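An aside, not part of the answer above: if a single accumulated string is genuinely needed, System.Text.StringBuilder avoids the quadratic copying; a minimal sketch under that assumption:
code :
# Sketch only: StringBuilder appends cheaply instead of copying the whole string on every +=
$tmp = 'aaa000'                          # stand-in for the value built inside the loops above
$sb  = [System.Text.StringBuilder]::new()
for ($i = 1; $i -lt 1000; $i++) {
    [void]$sb.AppendLine(("{0}{1}" -f $tmp.Substring(0, $tmp.Length - $i.ToString().Length), $i))
}
$retValue = $sb.ToString()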
Powershell trying to improve my File-Deletion script


By : Jerome Du
Date : March 29 2020, 07:55 AM
You could cook the entire script body down to just 3 lines (2 if you move Get-Date into the Where-Object filter; a sketch of that variant follows the code):
code :
Param (
  [Parameter(Mandatory=$true)]
  [string]$Path,
  [Parameter(Mandatory=$true)]
  [int]$Days,
  [Parameter(Mandatory=$false)]
  [switch]$Recurse
)

# Define the age threshold
$Threshold = (Get-Date).AddDays(-$Days)

# Remove the Days parameter from the splatted arguments ($null discards the Boolean that .Remove() returns)
$null = $PSBoundParameters.Remove('Days')

# Splat the remaining arguments and filter out files newer than $Days 
Get-ChildItem @PSBoundParameters -File | Where-Object { $_.LastAccessTime -lt $Threshold } | ForEach-Object { Remove-Item $_.FullName }
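And the 2-line variant mentioned above, a minimal sketch with the Get-Date arithmetic folded into the Where-Object filter (same param() block assumed):
code :
$null = $PSBoundParameters.Remove('Days')
Get-ChildItem @PSBoundParameters -File | Where-Object { $_.LastAccessTime -lt (Get-Date).AddDays(-$Days) } | ForEach-Object { Remove-Item $_.FullName }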
Improve efficiency of PIG Script


By : Bagus R Santoso
Date : March 29 2020, 07:55 AM
After grouping, sort the groups by count in descending order and take the topmost record:
code :
A1 = LOAD 'data.txt' USING PigStorage(',') AS (ID:int , Category:chararray);
A2 = DISTINCT A1;
A3 = GROUP A2 BY Category;
A4 = FOREACH A3 GENERATE group AS Category, COUNT(A2.ID) AS Number;
A5 = ORDER A4 BY Number DESC;
A6 = LIMIT A5 1;
A7 = FOREACH A6 GENERATE $0;
DUMP A7;
Improve loading efficiency of App Script code in Google Sheets


By : user3657514
Date : March 29 2020, 07:55 AM
Custom functions in Google Apps Script tend to take a long time to process, and I wouldn't recommend using them in several cells. I would like to understand better what you are trying to do with this data in order to answer properly, but in any case I would try one of these two solutions:
1 - Inline formula:
code :
function calculateColumnE() {
  var sheet = SpreadsheetApp.openById('some-id').getSheetByName('some-name');
  var row_count = sheet.getLastRow();
  var input_data = sheet.getRange(1, 1, row_count, 4).getValues();
  var data = [];
  for (var i = 0; i < row_count; i++) {
      var row_data; // this variable will receive value for column E in this row
      /*
      ...
      manage input_data here
      ...
      */
      data.push([row_data]); // data array MUST be a 2 dimensional array
  }

  sheet.getRange(1, 5, data.length, 1).setValues(data);
}
function TAGS(input,textreplacement) { //keeping your original function
  if (input.length > 0) {
    var lst = input.split(",")
    var rep = textreplacement.match(/<[^>]*>/g)
    for (i in rep){
      textreplacement = textreplacement.replace(rep[i],'"'+lst[i]+'"')
    }
    return textreplacement
  }
  else{
    return textreplacement
  }
}

function calculateColumnE() {
  var sheet = SpreadsheetApp.openById('some-id').getSheetByName('some-name');
  var row_count = sheet.getLastRow();
  var input_data = sheet.getRange(1, 1, row_count, 4).getValues();
  var data = [];
  for (var i = 0; i < row_count; i++) {
      var row_data; // this variable will receive value for column E in this row
      if (input_data[i][0] == "Spec") {
        row_data = "# SPEC " + input_data[i][1];
      } else if (input_data[i][0] == "Scenario") {
        row_data = "## " + input_data[i][1];
      } else if (input_data[i][0] == "Step") {
        row_data = "* " + TAGS(input_data[i][2], input_data[i][3]);
      } else if (input_data[i][0] == "Tag") {
        row_data = "Tags: " + input_data[i][1].replace(/\s/, ''); // not sure what this is doing
      } else if (input_data[i][0] == "") {
        row_data = "";
      }
      data.push([row_data]); // data array MUST be a 2 dimensional array
  }
  sheet.getRange(1, 5, data.length, 1).setValues(data);
}
How to improve the efficiency of the script?


By : Алексей Дуки
Date : March 29 2020, 07:55 AM
Though this particular piece of code can be slightly optimized, the time complexity will still remain O(m*n), where m and n are the number of keys in each dict. Since dict_1 has 4K keys and dict_2 has 100K keys, the total number of combinations to iterate over is 4,000 × 100,000 = 400 million.
Related Posts:
  • Why AQTime slows execution even when profiling is not on, and can anything be done for it?
  • Very slow guards in my monadic random implementation (haskell)
  • Oracle: Insertion on an indexed table, avoiding duplicates. Looking for tips and advice
  • What's the best way to store Logon User information for Web Application?
  • Best way to retrieve certain field of all documents returned by a Lucene search
  • CakePHP - Set recursive to -1 in AppModel then use Containable behaviour as appropriate
  • Memory Bandwidth Performance for Modern Machines
  • Is there a way to throttle or limit resources used by a user in Oracle?
  • what does the s3 prefix means with respect to scale?
  • Cassandra slow performance on AWS
  • Racket streams slower than custom streams?
  • Why does performance drop when a function is moved to another module?
  • My Cors Preflight Options Request Seems Slow
  • Is there a "Try" equivelent of DATETIME2FROMPARTS?
  • Python-iris performing very slowly in reading netcdf datasets
  • Matrix vs function for matrix operations
  • Find similarities between two lists
  • H2o: Is there a way to fix threshold in H2ORandomForestEstimator performance during training and testing?
  • Does it make sense to use n_jobs = -1 both for inner and outer loop?
  • How is APL optimized to have great performance at array processing? What are some example tricks and optimizations it pe
  • How do I interpret this difference in matrix multiplication GFLOP/s?
  • Why is this simple Haskell program so slow?
  • Why is my pixel manipulation function so slow?
  • How to correctly dockerize and continuously integrate 20GB raw data?
  • Will errorWithoutStackTrace be faster than error when there isn't HasCallStack?
  • Check permission; always a request against database?
  • Performance improvement jsf
  • Does the running of a second thread on an hyperthreaded CPU introduce extra overhead throughout the pipeline?
  • Multiple MPI communications and performance
  • What about type instability hurts peformance so much?
  • Why select distinct partitioned column is very slow?
  • Do FP and integer division compete for the same throughput resources on x86 CPUs?
  • What is FLOPS in field of deep learning?
  • Why does Vec::retain run slower after updating to Rust 1.38.0?
  • Slowdowns of performance of ets select
  • How do I read the Network Tab in Chrome DevTools - Load Times
  • in Snowflake, Does resize an existing warehouse helps in improving the performance of a running query?
  • Do single threaded programs execute in parallel in a CPU?
  • Chrome Web Requests getting stuck for 8 seconds
  • Dynatrace: what is the meaning of srv in dtCookie Cookie set by UEM
  • Why does Raku perform so bad with multidimensional arrays?
  • Neo4j query taking long time
  • What causes this high variability in cycles for a simple tight loop with -O0 but not -O3, on a Cortex-A72?
  • Why is my SSE assembly slower in release builds?
  • From S3 to Snowflake and performance
  • Why these 2 similar queries in Snowflake have very different performance?