
Reading trends from RRD in Lua
#21
Hello everyone,
Is it possible to get a CSV file containing all the curves with this method?

Thanks.
BR
#22
Sure, but you would need to change the script, because there are two issues:
1) how to combine two trends with different resolutions
2) the size of the generated .csv (because of email attachment limits and because of the LM's CPU power and memory)

This is why I haven't created a script that puts all trends into one .csv.

I think it is better to create a script which generates the trends one by one, with a pause between each email. Then you have all the files and can, for example, combine them in Excel (if they are not too big), with a Node.js script, or with some Windows program for such tasks.
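A minimal sketch of that one-by-one approach, reusing trend_export from the library in this thread (the trend names, the recipient address, and the availability of a configured mail() helper are assumptions, adjust them to your setup):

Code:
require('user.trends_pro')

-- Hypothetical list of trend names to export; replace with your own
names = { 'Salon_Temperatura', 'Kitchen_Temp', 'Outdoor_Temp' }

dates = {}
dates['start'] = 'month_ago'
dates['end'] = 'now'

for _, trendname in ipairs(names) do
  -- 'csv' mode returns the trend name and the generated CSV content
  local name, csv = trend_export(trendname, dates, 'csv')

  if csv then
    -- mail() must be configured in the LM mail settings beforehand
    mail('you@example.com', 'Trend export: ' .. name, csv)
  end

  -- Pause between emails so the mail server and the LM CPU are not overloaded
  os.sleep(30)
end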
Done is better than perfect
#23
(29.08.2018, 08:49)buuuudzik Wrote: Sure, but you would need to change the script, because there are two issues:
1) how to combine two trends with different resolutions
2) the size of the generated .csv (because of email attachment limits and because of the LM's CPU power and memory)

This is why I haven't created a script that puts all trends into one .csv.

I think it is better to create a script which generates the trends one by one, with a pause between each email. Then you have all the files and can, for example, combine them in Excel (if they are not too big), with a Node.js script, or with some Windows program for such tasks.

Indeed, the CPU can be overloaded.
After making a backup, is there no way to recover all of this data from a file?
I would like to retrieve the data for 65 curves (same resolution) over 6 months ... that is really a lot to send by mail.

Thank you for the reply.
BR
#24
I've updated the trends_pro library to allow simple modification of its data and the addition of more calculated columns.

Code:
-- This function prepares CSV content from the data rows and the passed headers
function generateCSVTable(rows, headers)
  if #rows > 0 then
    if #rows[1] > #headers then
      error("There should be " .. tostring(#rows[1]) .. " header columns!")
    end
  end

  -- Join every data row into one comma-separated line
  for k,v in ipairs(rows) do
    rows[k] = table.concat(v, ",")
  end

  local csv = table.concat(headers, ",") .. "\r\n" .. table.concat(rows, "\r\n")
  return csv
end

-- Adds columns, computed from the current row and the previous one, to the current row
function addColumnsToRow(row, prevRow, columnGetters)
  for k,getter in ipairs(columnGetters) do
    table.insert(row, getter(row, prevRow))
  end
  return row
end

-- Adds columns, computed from the current row and the previous one, to every row
function addColumnsToRows(rows, columnGetters)
  local prev = nil
  for k,row in ipairs(rows) do
    rows[k] = addColumnsToRow(row, prev, columnGetters)
    prev = row
  end
  return rows
end

function copyObject(obj)
  return json.decode(json.encode(obj))
end

-- This function prepares an export table and a .csv file for the defined range
function trend_export(name, dates, mode, timestampType)
 
  require('trends')

  -- Detect a DST time shift between two timestamps
  function detectTimeShift(t0, t1)
    local t0_isdst, t1_isdst = os.date('*t', t0).isdst, os.date('*t', t1).isdst

    if t0_isdst ~= t1_isdst then
      if t1_isdst then
        -- change to summer time
        return 1
      else
        -- change to winter time
        return 2
      end
    else
      return false
    end
  end
 
  require("json")
  dates = copyObject(dates)

  -- Convert keywords ("now", "yesterday", ...) in "start" to a date table
  if type(dates['start']) == "string" then
    if dates['start'] == "now" then dates['start'] = os.date("*t", os.time())
    elseif dates['start'] == "yesterday" then dates['start'] = os.date("*t", os.time()-24*3600)
    elseif dates['start'] == "week_ago" then dates['start'] = os.date("*t", os.time()-7*24*3600)
    elseif dates['start'] == "month_ago" then dates['start'] = os.date("*t", os.time()-30*24*3600)
    elseif dates['start'] == "year_ago" then dates['start'] = os.date("*t", os.time()-365*24*3600)
    end
  end

  -- Convert keywords ("now", "yesterday", ...) in "end" to a date table
  if type(dates['end']) == "string" then
    if dates['end'] == "now" then dates['end'] = os.date("*t", os.time())
    elseif dates['end'] == "yesterday" then dates['end'] = os.date("*t", os.time()-24*3600)
    elseif dates['end'] == "week_ago" then dates['end'] = os.date("*t", os.time()-7*24*3600)
    elseif dates['end'] == "month_ago" then dates['end'] = os.date("*t", os.time()-30*24*3600)
    elseif dates['end'] == "year_ago" then dates['end'] = os.date("*t", os.time()-365*24*3600)
    end
  end         

  -- Start and end in unix time
  starttime = os.time(dates['start'])
  endtime = os.time(dates['end'])

  -- Swap "start" and "end" when "end" is earlier than "start"
  if starttime > endtime then
    dates['start'], dates['end'] = dates['end'], dates['start']
    starttime, endtime = endtime, starttime
  end

  -- Read trend resolution from DB
  resolution = db:getlist("SELECT resolution*60 FROM trends WHERE name='" .. name .. "'")[1]

  -- Prepare a table which includes the whole data from the specified range
  if resolution then
    -- Expand the range by one more day if there is some data in the last day
    if (os.date("%H", endtime)*3600 + os.date("%M", endtime)*60) >= resolution then
      dates['end'].day = dates['end'].day + 1
    end
    raw_data = trends.fetch(name, dates, resolution)

    start = 1
    data = {}
    start_timestamp = os.time(dates['start']) + resolution
    -- Cut the data: keep only records from the specified range
    offset = 0
    skip = 0
    k = 1
    while k <= #raw_data do
      v = raw_data[k]
     
      if k == 1 then last_timestamp = start_timestamp else last_timestamp = current_timestamp end
      current_timestamp = start_timestamp + (k-1) * resolution + offset
           
      timeShiftStatus = detectTimeShift(last_timestamp, current_timestamp)
      if timeShiftStatus then
        if timeShiftStatus == 1 then -- winter -> summer
          skip = math.floor(3600 / resolution) -- skip 1 hour of empty records
          offset = offset - 3600
        elseif timeShiftStatus == 2 then -- summer -> winter
          offset = offset + 3600
        end
      end
     
      -- Add to new table only records from specified range
      if current_timestamp >= starttime and current_timestamp <= endtime then
        if timestampType == 'unixepoch' then data[start] = {current_timestamp, v}
        else data[start] = {os.date('%Y.%m.%d %H:%M:%S', current_timestamp), v} end
        start = start+1
      end

      k = k + 1 + skip
      skip = 0
    end
    
    if mode == 'csv' then
      -- Prepare .csv file
      csv = generateCSVTable(data, { "timestamp", "value" })
      return name, csv, resolution
    elseif mode == 'table' then
      return name, data, resolution
    end
  else
    alert('There is no trend with such a name: ' .. name)
  end
end

-- Merge values from multiple trends into one table
function mergeRows(baseRows, rows)
  for k,row in ipairs(baseRows) do
    table.insert(baseRows[k], rows[k][2])
  end
  return baseRows
end

function getMultipleColumnHeader(names)
  local columns = { "timestamp" }
  for k,name in ipairs(names) do
    table.insert(columns, name)
  end
  return columns
end

-- This function prepares an export table and a .csv file from multiple trends over the defined range
function multiple_trend_export(names, dates, mode, timestampType)
  local baseRows = {}
 
  for k,name in ipairs(names) do
    local trend_name, rows = trend_export(name, dates, 'table', timestampType)
    if #baseRows == 0 then
      baseRows = rows
    else
      baseRows = mergeRows(baseRows, rows)
    end
  end
 
  if mode == 'csv' then
    -- Prepare .csv file
    csv = generateCSVTable(baseRows, getMultipleColumnHeader(names))
    return names, csv, resolution
  elseif mode == 'table' then
    return names, baseRows, resolution
  end
end

What's changed:
I've added these functions:

addColumnsToRows(rows, columnGetters) - allows adding more columns whose values are calculated from the current row and the previous one

generateCSVTable(rows, headers) - produces CSV content with named headers, which should have the same length as the data rows

multiple_trend_export(names, dates, mode, timestampType) - can be used to generate a table combining multiple trends

Usage example:
Code:
require('user.trends_pro')


dates = {}
dates['start'] = 'yesterday'
dates['end'] = 'now'

name, rows = trend_export("Salon_Temperatura", dates, 'table', 'unixepoch')

addColumnsToRows(rows, {
    function(row, prevRow)
      local value = row[2]
     
      local prevValue = 0
      if prevRow then prevValue = prevRow[2] end
     
     
      if value > 3 or prevValue > 3 then
        return "YES"
      else
        return "NO"
      end
    end
})

csv = generateCSVTable(rows, { 'timestamp', 'value', 'isBiggerThan3' })

log(name, csv)


-- Generating a CSV with multiple trends in the same rows
names, csv = multiple_trend_export({"Kitchen_Temp", "Outdoor_Temp"}, dates, 'csv', 'unixepoch')

log(names, csv)
Done is better than perfect
#25
Hi!

Thanks for your effort with this library! It is very useful!

I have tried it and noticed a small difference between the export from the visualization and the export from the script.

The resolution of the trends I have tested is 30 min.

There is an offset of 30 minutes in the timestamps:


Code:
-- This the timewindow I have used to test

dates = {}
dates['start'] = { year = 2022, month = 6, day = 1,  hour = 0, min = 0, sec = 0}
dates['end']   = { year = 2022, month = 7, day = 1,  hour = 0, min = 0, sec = 0}
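
If it helps with tracking this down: in trend_export above, the first sample is timestamped at os.time(dates['start']) + resolution, i.e. each value is stamped at the end of its averaging interval, while the visualization export may stamp the same value at the interval start. A small illustration of that one-step difference (the 30-minute resolution matches the trends tested above; this reading of the cause is an assumption, not confirmed):

Code:
resolution = 30 * 60 -- 30 min, as in the trends tested above
start = os.time({ year = 2022, month = 6, day = 1, hour = 0, min = 0, sec = 0 })

-- The script stamps the first value at the end of the first interval:
scriptStamp = start + resolution -- 00:30
-- The visualization export seems to stamp it at the interval start:
visualStamp = start -- 00:00
log(os.date('%H:%M', scriptStamp), os.date('%H:%M', visualStamp))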


