Master-Slave MySQL replication problem


Master-Slave MySQL replication problem

Postby sonidhaval » Thu Aug 11, 2011 2:32 am

Dear All,

We have two servers running ViCiDial. One runs ViciBox on openSUSE 11.x and the other runs ViCiDial on CentOS 5.6.

We are trying to set up master-slave MySQL replication between these two servers.

We have configured both servers for replication and it works fine, but as soon as we place a call, the slave server immediately reports the error below:

Last_Error: Could not parse relay log event entry. The possible reasons are: the master's binary log is corrupted (you can check this by running 'mysqlbinlog' on the binary log), the slave's relay log is corrupted (you can check this by running 'mysqlbinlog' on the relay log), a network problem, or a bug in the master's or slave's MySQL code. If you want to check the master's binary log or slave's relay log, you will be able to know their names by issuing 'SHOW SLAVE STATUS' on this slave.

How can we solve this and get replication running again?
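
A minimal sketch of the checks the error message suggests (the log file names below are examples; the real names come from SHOW SLAVE STATUS on the slave):

    # On the slave: find the current relay log and master binlog names
    mysql -u root -p -e "SHOW SLAVE STATUS\G"

    # Inspect the relay log named in Relay_Log_File (path is an example):
    mysqlbinlog /var/lib/mysql/relay-bin.000002 > /dev/null \
        || echo "relay log is corrupted"

    # On the master: inspect the binlog named in Master_Log_File (path is an example):
    mysqlbinlog /var/lib/mysql/mysql-bin.000001 > /dev/null \
        || echo "master binlog is corrupted"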

Thank you,

Regards,
Dhaval Soni
sonidhaval
 
Posts: 3
Joined: Wed Aug 10, 2011 4:31 pm

Postby williamconley » Thu Aug 11, 2011 10:32 am

why are you trying to set up master-slave replication between two vicidial servers? if you are trying to "cluster them" to work as a team, this approach will not work.

http://www.poundteam.com/downloads/Vici ... v1%201.pdf

If you want to create a "hot spare" system, that's a different story.
Vicidial Installation and Repair, plus Hosting and Colocation
Newest Product: Vicidial Agent Only Beep - Beta
http://www.PoundTeam.com # 352-269-0000 # +44(203) 769-2294
williamconley
 
Posts: 20018
Joined: Wed Oct 31, 2007 4:17 pm
Location: Davenport, FL (By Disney!)

Postby sonidhaval » Thu Aug 11, 2011 12:15 pm

We are just trying to make our 2nd system a backup of MySQL and ViCiDial.

Is there any other alternative for the same?
sonidhaval
 
Posts: 3
Joined: Wed Aug 10, 2011 4:31 pm

Postby williamconley » Thu Aug 11, 2011 2:56 pm

if you want a backup, you can create a replicated database but do not run any vicidial scripts on the database until you "activate" the box. since the two systems would both be modifying the same sql tables based on the same criteria, they would fight each other.

replication will offer the most up-to-date hot system capabilities, but mysqldump, ftp transfer, and mysql asterisk < dumpfile.sql on the new machine will give a clean copy at timed intervals quite reliably.
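
a minimal sketch of that timed dump-and-copy cycle (the host name, paths, and scp in place of FTP are assumptions; "asterisk" is the stock vicidial database name, and credentials are assumed to be in ~/.my.cnf or added with -u/-p):

    # On the live server, e.g. from a nightly cron job:
    mysqldump asterisk > /tmp/dumpfile.sql
    scp /tmp/dumpfile.sql backup-server:/tmp/dumpfile.sql

    # On the backup server, load the clean copy:
    mysql asterisk < /tmp/dumpfile.sql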

remember that your hot spare will not have the same IP address and therefore will not be as "hot" as it could be unless you ALSO run the server ip update script each time you transfer the data. if you do run that script, however, the backup system could be VERY ready with all scripts active and "just launch" any time you put agents on it (by changing the DNS entry of the site the agents and admins log onto). Obviously that would be a bad thing to do on a replicated system as that would immediately "unsync" the systems.

The only drawback of the data push version becoming live is that it would immediately call all the prospects since the last data push (as they will be "next in line" just like they were when the data was pushed).

An alternative may be to replicate into the hot spare server, but change the astguiclient.conf file to point to the wrong database ... then fix astguiclient.conf, run the server update ip script, and reboot. In about 3 minutes your hot spare is Live.
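
a sketch of that activation sequence on the spare (ADMIN_update_server_ip.pl is the stock astguiclient IP-change script; the config variable name assumes a standard install):

    # 1. Point astguiclient.conf at the real (now local) database:
    vi /etc/astguiclient.conf        # fix VARDB_server to the correct DB address

    # 2. Update the server IP throughout the vicidial config and database:
    /usr/share/astguiclient/ADMIN_update_server_ip.pl   # prompts for old and new IP

    # 3. Reboot so everything comes up on the new settings:
    reboot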
Vicidial Installation and Repair, plus Hosting and Colocation
Newest Product: Vicidial Agent Only Beep - Beta
http://www.PoundTeam.com # 352-269-0000 # +44(203) 769-2294
williamconley
 
Posts: 20018
Joined: Wed Oct 31, 2007 4:17 pm
Location: Davenport, FL (By Disney!)

Postby sonidhaval » Thu Aug 25, 2011 2:49 pm

Hi,

Can we not cluster full ViciBox-installed servers? We want a DB running on each server. Let's say we have one main ViCiDial server and another with the same config as a backup or hot spare server.

If the main server goes down, we want the other server in action ASAP.

Can we do that, and if so, how?

Thanks,
Dhaval Soni
sonidhaval
 
Posts: 3
Joined: Wed Aug 10, 2011 4:31 pm

Postby williamconley » Thu Aug 25, 2011 5:34 pm

so you want a complete backup server with everything identical on it "in standby". i don't see why not. you could probably use database replication for this, but beware as the IP will change on the new server when it "fires up" so you'll need to change the ip and reboot to "make it go live".

i have an idea for how to avoid that, but it's convoluted. LOL
Vicidial Installation and Repair, plus Hosting and Colocation
Newest Product: Vicidial Agent Only Beep - Beta
http://www.PoundTeam.com # 352-269-0000 # +44(203) 769-2294
williamconley
 
Posts: 20018
Joined: Wed Oct 31, 2007 4:17 pm
Location: Davenport, FL (By Disney!)

Re: Master-Slave MySQL replication problem

Postby brandon.parncutt » Sat Jul 18, 2015 1:10 pm

I've set up a Vicidial Cluster (active-passive) using Corosync/Pacemaker, DRBD (for mysql) and some rsync scripts. Failover works flawlessly (agent calls get dropped if it happens during business hours, but within a minute or two everything is back up on its own). It's not terribly difficult...and I've been meaning to post a tutorial in the forum...I can answer questions if anyone has them.
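
For anyone curious what that looks like, here is a rough sketch of the kind of Pacemaker resources involved for the MySQL half (crm shell syntax; all resource, device, and IP names here are illustrative, not the actual config):

    # DRBD-backed MySQL with a floating IP under Pacemaker (crm shell sketch):
    crm configure primitive p_drbd_mysql ocf:linbit:drbd \
        params drbd_resource=mysql op monitor interval=15s
    crm configure ms ms_drbd_mysql p_drbd_mysql \
        meta master-max=1 clone-max=2 notify=true
    crm configure primitive p_fs_mysql ocf:heartbeat:Filesystem \
        params device=/dev/drbd0 directory=/var/lib/mysql fstype=ext4
    crm configure primitive p_ip_mysql ocf:heartbeat:IPaddr2 \
        params ip=192.168.1.100 cidr_netmask=24
    crm configure primitive p_mysql lsb:mysql op monitor interval=30s
    crm configure group g_mysql p_fs_mysql p_ip_mysql p_mysql
    crm configure colocation col_mysql_on_drbd inf: g_mysql ms_drbd_mysql:Master
    crm configure order ord_drbd_before_mysql inf: ms_drbd_mysql:promote g_mysql:start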
brandon.parncutt
 
Posts: 16
Joined: Tue May 12, 2015 3:10 pm
Location: Florida

Re: Master-Slave MySQL replication problem

Postby brandon.parncutt » Sat Jul 18, 2015 1:12 pm

*A DRBD volume is also used for /var/spool/asterisk.
brandon.parncutt
 
Posts: 16
Joined: Tue May 12, 2015 3:10 pm
Location: Florida

Re: Master-Slave MySQL replication problem

Postby williamconley » Thu Jul 23, 2015 11:41 pm

brandon.parncutt wrote:I've set up a Vicidial Cluster (active-passive) using Corosync/Pacemaker, DRBD (for mysql) and some rsync scripts. Failover works flawlessly (agent calls get dropped if it happens during business hours, but within a minute or two everything is back up on its own). It's not terribly difficult...and I've been meaning to post a tutorial in the forum...I can answer questions if anyone has them.

Is this a vicidial cluster? How many agents/calls were "live" during your heaviest run so far?

Fear: What if it works up to 20 agents (40 calls) but cannot survive 30 agents (60 calls), and can't even begin to consider 150 agents with 300 calls ... 8-)
Vicidial Installation and Repair, plus Hosting and Colocation
Newest Product: Vicidial Agent Only Beep - Beta
http://www.PoundTeam.com # 352-269-0000 # +44(203) 769-2294
williamconley
 
Posts: 20018
Joined: Wed Oct 31, 2007 4:17 pm
Location: Davenport, FL (By Disney!)

Re: Master-Slave MySQL replication problem

Postby brandon.parncutt » Tue Jul 28, 2015 12:58 pm

We are a rather small call center, but we have 30 agents, all on calls, and the cluster handles it fine. Load remains around 2 on the database server. This is a quad-core Xeon system with hardware RAID 10 on SSD disks and 32GB RAM. I split the services between the two active nodes in the cluster: node 1 handles asterisk and apache (load on this server never hits 2), node 2 is the database. One node could easily handle that number of agents, however; I've done that as well. Also, I have a slave DB on another server set up for replication and all reporting functions. I'm not sure how much load goes up with the number of agents, but I think running 50+ wouldn't be an issue. I wouldn't expect to hit the cluster's limit until closer to 100 agents. We're expanding, so time will tell.
brandon.parncutt
 
Posts: 16
Joined: Tue May 12, 2015 3:10 pm
Location: Florida

Re: Master-Slave MySQL replication problem

Postby williamconley » Sat Aug 15, 2015 1:35 pm

That sounds about right.

Keep an eye on it as you grow. Perhaps even add a "load logger" so you can avoid relying on "gut feeling" or what you "remember" of the previous load during heavy usage.

And be prepared to understand that average load does not always reflect surge load, which can hammer the system when individual commands slam it temporarily but don't last long. Remember that replication STILL requires a timed process to write all recent changes (since the last write) to a temp file for transfer to the replication slave. That can cause a spike that does not show prominently in the average, and during that process tables are locked for write access, which can cause ... hiccups. And I won't even talk about what happens if the replication fails: rewriting the entire replication set (for the replication reset) has brought down many enterprise-level systems with no easily seen cause for "down!" (replication is not easy for the average user to see happening ...).

So when you build your system watcher utility ... be sure to include replication status in it, for "the next guy" (and yourself ... in case it runs so smoothly you forget to check next year when the first hiccup occurs!).
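
A minimal sketch of such a watcher (a hypothetical cron script; the threshold and alert address are placeholders, and MySQL credentials are assumed to live in ~/.my.cnf):

    #!/bin/bash
    # Log the load average so surges are visible later, not just remembered:
    echo "$(date '+%F %T') $(cat /proc/loadavg)" >> /var/log/load_history.log

    # Check replication health on the slave:
    STATUS=$(mysql -e "SHOW SLAVE STATUS\G")
    RUNNING=$(echo "$STATUS" | awk '/Slave_SQL_Running:/ {print $2}')
    BEHIND=$(echo "$STATUS" | awk '/Seconds_Behind_Master:/ {print $2}')

    # Alert if the SQL thread died or the slave is more than 5 minutes behind:
    if [ "$RUNNING" != "Yes" ] || [ "$BEHIND" = "NULL" ] || [ "$BEHIND" -gt 300 ]; then
        echo "Replication trouble: running=$RUNNING behind=$BEHIND" \
            | mail -s "replication alert" admin@example.com
    fi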
Vicidial Installation and Repair, plus Hosting and Colocation
Newest Product: Vicidial Agent Only Beep - Beta
http://www.PoundTeam.com # 352-269-0000 # +44(203) 769-2294
williamconley
 
Posts: 20018
Joined: Wed Oct 31, 2007 4:17 pm
Location: Davenport, FL (By Disney!)

Re: Master-Slave MySQL replication problem

Postby brandon.parncutt » Mon Aug 31, 2015 12:53 pm

Yes, I've noticed the spikes, and I'm aware that the load average one sees in top or uptime isn't always an accurate representation. I also know the relation between I/O load and CPU load, and am very familiar with tuning the linux kernel, interrupts, and CPU affinity. I/O load is not an issue, especially now that my caching is working: I finally got the battery for the write cache on the RAID controllers, so now I have battery-backed write cache and my load has dropped to below 1.

My monitoring utilities keep an eye on replication status and alert when it falls behind. When this happens, I have a script which resets all the reporting functions to use the main database server instead of the slave. I've actually had the replication server go down once (a bad drive for the root partition, which was an old SSD I found here and not part of the RAID array); when I got the server back up and got replication started again, it was over 10,000 seconds behind. It caught up in a matter of minutes, however, and then reporting was switched back to the slave.

Error 1062 can safely be skipped on the slave, and it seems to be the source of most of the headaches in getting the slave back up without doing another dump and restoring it and starting replication again from scratch. You do not always have to take such an extreme measure to get replication running again: if you review the binlogs with mysqlbinlog on both the master and the slave, you can find the transaction that failed and caused replication to stop, and then resume replication from a different position. The slave might have to play catch-up, but it will, and replication will resume.
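
In concrete terms, those recovery paths look something like this (a sketch; the log file name and position are placeholders you would read out of your own binlogs):

    # Skip a single duplicate-key (1062) event on the slave and continue:
    mysql -e "STOP SLAVE; SET GLOBAL SQL_SLAVE_SKIP_COUNTER = 1; START SLAVE;"

    # Or re-point the slave just past the failed transaction found with mysqlbinlog:
    mysql -e "STOP SLAVE;
              CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000042',
                               MASTER_LOG_POS=107;
              START SLAVE;"

    # Or tell the slave to always skip 1062 errors (my.cnf on the slave; use with care):
    #   slave-skip-errors = 1062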

Also, I'm not sure what type of replication you are using, but the replication process itself does not lock tables as you described. It makes use of binary logs (binlogs), which are written copies of the actual SQL transactions; a continuous I/O thread running over the master/slave connection reads them and recreates them as relay logs on the slave. This page explains the process quite well:

https://www.percona.com/blog/2013/01/09 ... ally-work/

I'm still quite busy, so I haven't gotten around to writing a tutorial. I'm working two sysadmin jobs and recently started my own consulting company, but I will get one written up sometime. Maybe I'll post it on my company's site as well, although that is definitely one of the options we offer to clients and make considerable money on. Anyway, thanks for the feedback. I love this product, SIP technology in general, and the open source community you guys have here.
brandon.parncutt
 
Posts: 16
Joined: Tue May 12, 2015 3:10 pm
Location: Florida

