
Updating Sheets with CSV avoiding Duplicates

I am updating Google Sheets from a CSV file on my local system using Python, with the Pandas library and pygsheets.

The dataframe I have on my local system, Test, looks like this:

[screenshot of the local Test dataframe]

The dataframe I have in Google Sheets is as follows:

[screenshot of the Google Sheets dataframe]

I am trying to update the dataframe in Sheets to match the one on my local system. This is just a sample, as the real file is too big to show.

I am using the following code:

import pandas as pd
import pygsheets

# authorize with the service account and open the target spreadsheet
gc = pygsheets.authorize(service_file='cs.json')
sh = gc.open_by_url('GOOGLESHEETLINK')
wks = sh[0]  # first worksheet

# read the local CSV and overwrite the sheet starting at cell B1
Test = pd.read_csv('test.csv')
wks.set_dataframe(Test, 'B1')

It updates the records, but it takes a long time because it essentially re-uploads the whole CSV. I want to refresh the sheet instead, so that only records that don't already exist get added, treating Request ID as the unique key. For each Request ID in the CSV (Test): if it doesn't exist in the sheet, the row should be appended at the bottom; if it exists and the row is identical to the one in Test, nothing should change; if it exists but differs, the sheet row should be updated.

For example: for Request ID 8, the Status has been changed to In Progress in Test, so the same change should be made in Sheets. Whereas the record for Request ID 9 is unchanged, so no changes should be made.

And since Request ID 12 does not exist in the sheet but is present in Test, it should be appended.
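Something along these lines is roughly what I have in mind, but I am not sure it is the right approach. It is an untested sketch: the 'Status' comparison is mine, and it assumes the sheet data starts at A1 with a header row, so the row/column offsets would need adjusting since my data actually starts at B1.

import pandas as pd
import pygsheets

gc = pygsheets.authorize(service_file='cs.json')
wks = gc.open_by_url('GOOGLESHEETLINK')[0]

local = pd.read_csv('test.csv')
remote = wks.get_as_df()  # current sheet contents as a dataframe

# 1) Request IDs present locally but not in the sheet -> append at the bottom
new_rows = local[~local['Request ID'].isin(remote['Request ID'])]
if not new_rows.empty:
    wks.append_table(new_rows.values.tolist(), overwrite=False)

# 2) Request IDs present in both -> rewrite only the rows that differ
both = local.merge(remote, on='Request ID', suffixes=('', '_sheet'))
for _, row in both.iterrows():
    if row['Status'] != row['Status_sheet']:  # compare whichever columns can change
        # locate the sheet row for this Request ID (+2 for the header row and 1-based indexing)
        sheet_row = remote.index[remote['Request ID'] == row['Request ID']][0] + 2
        values = local.loc[local['Request ID'] == row['Request ID']].values.tolist()[0]
        wks.update_row(sheet_row, values)

Would this be a reasonable way to do it, or is there a faster/cleaner approach?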

Thank you in advance for the help!
