
"Pick and Top of the Pops" No 1's


  • #91
    Here's Dave Taylor's quote (in an email to me 12 Nov 2012) about the NME mix-up:
    Similar story with NME in August 68. Tom Jones was given the points of Tommy James, and on the 31st Aug it prevented the Bee Gees from going to the top. Had this mistake not occurred, the 3-way BBC tie would not have happened. The Bee Gees would have been sole #1, with Herb Alpert & the Beach Boys being joint #2!

    Here's yet another Dave post about this, on the Popscene forum 6 Sept 2011:
    The EMI average chart gives the #1 to Herb Alpert. This is the big one & settles the BBC joint number one, where BBC had Herb in a tie with the Bee Gees' "I've Gotta Get A Message To You" & the Beach Boys' "Do It Again". The Beach Boys were a Record Retailer #1, so 85 shops to over 600 couldn't be right. Another chart mag of the time, "Top Pops" Magazine, had the Bee Gees at the top this week. "Top Pops" itself was a teen mag that made a chart from 12 branches of WH Smith. It lasted from 1968 to March 1971.

    NOTE: Since I put this list together, evidence seems to suggest that Record Retailer actually had an unlisted joint number one on 31st Aug 68, with both the Beach Boys & the Bee Gees. Which changes things a bit, because it means that the BBC average should have shown the Bee Gees as sole number one, with Herb & the Beach Boys at joint #2.

    Regarding the lag/drag on the BBC chart, they should have left Record Retailer out of the average altogether! That was the main source of the drag...

    Here's another tidbit I only found out last month, from Alan Smith. The NME chart that was published in the Sunday papers was a 'special' NME chart, that only sampled from the last 2 days of the sample week, covering the weekend higher volume sales, thus not the full week. And they sampled only about 20 of their largest record shops out of their much larger total. So the NME Sunday paper chart was indeed much 'faster'.

    According to Alan, Melody Maker was sampling around 265 record shops in Aug 1968, NME around 175, Record Retailer 80. Using those values in a weighted average, and the 3 charts as published, Herb Alpert actually does come out on top. Followed by the Bee Gees, then Beach Boys, then Tom Jones. But consider this in light of Dave Taylor's 3 quotes.
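The weighted-average idea described above can be sketched in a few lines. The shop counts (MM 265, NME 175, RR 80) are the figures attributed to Alan Smith in the post; the per-record chart positions below are purely hypothetical placeholders, not the actual 31 Aug 1968 listings.

```python
# Minimal sketch: average chart position, weighting each chart by its
# shop sample size (lower result = stronger overall showing).

def weighted_avg_position(positions, weights):
    """positions: a record's placing on each chart; weights: each chart's
    shop sample size. Returns the sample-size-weighted mean position."""
    total_weight = sum(weights)
    return sum(p * w for p, w in zip(positions, weights)) / total_weight

shop_counts = [265, 175, 80]   # MM, NME, RR sample sizes, Aug 1968 (per Alan Smith)

# HYPOTHETICAL positions on (MM, NME, RR) for two records:
record_a = [1, 2, 2]
record_b = [2, 1, 1]

# record_a wins despite record_b holding two #1s, because MM's larger
# sample carries the most weight:
print(weighted_avg_position(record_a, shop_counts))
print(weighted_avg_position(record_b, shop_counts))
```

This shows how, under a sample-size weighting, a single #1 on the biggest paper can outrank two #1s on the smaller ones, which is the mechanism by which Herb Alpert could come out on top.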

    Yes, if Melody Maker had been given 'official' status for the 60s, Lady Madonna and Little Red Rooster would lose out, but Please Please Me, Penny Lane, Strawberry Fields Forever, and 19th Nervous Breakdown (just to name a few) would have won. I'll take that trade, ha. But seriously, we should go for what we think is the most correct, most fair chart option; not use possible outcomes to determine that decision. But you know that, ha...


    • #92
      I think he is saying that NME gave SOME of the points to Jones that should have been allocated to James, and had this not happened the NME would have placed the Bee Gees at number one. But then you cannot assume everything else is unaffected, because it depends on where Jones falls to and where James rises to. Alpert could have been sole number 2 in the BBC chart as a result.

      MM also had a Sunday chart. I think the NotW had MM, and the Mirror and People had NME.

      Do you know what caused the RR drag? If it was because their smaller independent stores were out in the sticks and the local yokels there were less hip, then I don't think they should be excluded in a compo retrospective but agree there was a good argument for the BBC to exclude them.

      Incidentally I notice that the normally reliable Wikipedia page 'List of UK charts and number-one singles (1952-1969)' has the Bee Gees top in all 3 charts, which is incorrect as it didn't make no. 1 in MM.
      Last edited by Splodj; Tue September 3rd, 2019, 11:27. Reason: Further reflection on the Sunday papers


      • #93
        Edited post as on further reflection I think the Sundays were the other way round (MM in NotW).
        Last edited by Splodj; Tue September 3rd, 2019, 11:30.


        • #94
          I must apologize again for my comments concerning the Record Retailer chart. I retract my statement that the BBC should have left them out of their average calculation. They were a legit chart, they did what they did, and provided a Top 50 chart for half the 60s when no other chart did. I occasionally forget my problem is not with RR, it’s with the Official Charts Co for declaring RR the ‘official’ chart for the 60s. Even though RR sampled the fewest number of record shops, and disagreed with the other charts the most often.

          So why did RR drag behind the other charts? RR may have calculated their own chart relatively accurately, except for the occasional mistakes common to all charts, but their forced tiebreaking was detrimental, especially if they were looking at previous week data in doing so. If their record shop sampling was not diverse enough geographically across the entire UK, that would have been a problem, as would their relatively low number of sampled shops.

          If the RR sampling week was of a different time period than the other charts, that would have been a problem, too. But Alan Smith is adamant that all the 60s charts sampled from Monday to Saturday, as per the chart compilers he personally interviewed. Period, end of discussion. Though I did find some posters on other forums that stated some different sampling periods. One Brian Hankin on the Haven forums said RR sampled from Saturday to Friday, then in July 1967 changed to Monday to Saturday. Alan refutes this: Monday to Saturday all the way. I think Dave Taylor may have mentioned some different sampling periods too, but without digging it up I don't recall which charts and which days.

          Nonetheless, RR did compile their charts on a different day than the other charts. RR compiled on a Tuesday, the others on a Monday. RR also changed their publishing date in July 1967, which threw a major wrench into the charts that month. Instead of the usual 80 shop returns gathered for their calculations, only about 20 shops could adjust that quickly and get their returns in on time. They eventually recovered in time for August.

          As the 60s wore on, RR did get closer to the other charts, no doubt due to the increase in shops sampled. Again, all the charts were relatively close to each other, the big hits were the big hits on all the charts. It’s just that RR disagreed the most with the other charts when you look at specific chart positions.

          In terms of #1 records of the 60s:

          --NME had 7 records at #1 that did not reach #1 on any other chart, and missed 2 records that were #1 on all the other charts = total of 9

          --Melody Maker had 7 records at #1 that did not reach #1 on any other chart, and missed 6 records that were #1 on all the other charts = total of 13

          --Record Retailer had 11 records at #1 that did not reach #1 on any other chart, and missed 6 records that were #1 on all the other charts = total of 17

          Also, RR had the fewest debuts at #1 compared to the other charts.

          What could explain all this? Either RR was the most accurate chart, or the least accurate. I think record shop sample size answers the question.

          When you look at all the Top 10s, it’s even more apparent. RR not only disagreed the most often with the other charts in terms of record peak position, but also in the distance from the average position. Across every time period of the 60s, whether 5 charts 1960-62, 4 charts 1962-67, or 3 charts 1967-69. The numbers are available in my posts in other threads here on UKMix. RR got closer to the other charts as the decade wore on, but they were always the least in agreement.

          I find it interesting that going into the 60s, the other 4 charts had been around for years, able to get things up and running, and get the kinks worked out, into becoming a smooth operation. But the OCC wants to give ‘official’ status to RR as soon as they published their first chart! Totally unreasonable.

          So RR, a good chart, not great, they were a piece of the puzzle, a Top 50 chart for half the 60s when the other charts were smaller. OCC, big mistake in choosing RR to represent the 60s. Not warranted at all when you look at the facts and data.

          Durn I’ve done it again, written a too long post, ugh…


          • #95
            Originally posted by RokinRobinOfLocksley
            They were a legit chart, they did what they did, and provided a Top 50 chart for half the 60s when no other chart did.
            I've read this topic with great interest, and I don't want to diminish your post to a single line. But to me, providing a Top 50 as the chart with the smallest sample size, where the others provided a Top 30, seems more like a negative, for two reasons. The first: the lower your sample size, the less accurate you will be the further down the chart you go. The second: the other charts chose to publish a smaller chart even though their sample size was larger, suggesting to me that they weren't confident in the lower ranks, or at the very least deemed it unnecessary within the state of the record business at the time.

            So even though RR consistently being the least 'agreeing' chart in the top spots is bad enough for the 'official' UK 1960s chart, it's almost worse to me that the OCC also publishes the most uncertain, least accurately sampled ranks 31-50 as official. Especially because RR having a Top 50 throughout the 1960s seems to have been a (big?) factor in choosing it as the de facto chart to represent them.


            • #96
              I think that in March 1967, when MM decided the bottom part of its chart was being compromised, instead of cutting back from 50 to 30 it should have taken measures to combat or minimise that abuse instead.


              • #97
                As far as I am aware they did that as well. There was an article to say they would monitor and take action.
       - for the latest and best chart book - By Decade!
                Now including NME, Record Mirror and Melody Maker from the UK and some Billboard charts


                • #98
                  I think MM said they would continue to collate a Top 50 internally to enable them to monitor the unpublished 31-50 positions for suspicious behaviour.

                  On another matter, I have been looking at the thread on Beatles EPs. Without doing a detailed analysis, the POTP chart positions for them appear roughly similar to those in MM, NME and Disc. As RR did not include EPs in their main chart (having a separate EP chart) I wondered if the BBC had a method to ensure that this did not drag down the average score for EPs in their chart.


                  • #99
                    Splodj, yes on the 1st paragraph, and I think yes on the 2nd. I don't recall Alan Smith or Dave Taylor specifically mentioning how the BBC adjusted their chart for EPs not being on the RR chart (as they had their own separate EP chart), but they must have done so.

                    Looking at The Beatles 'Twist and Shout' EP, it peaked on NME, MM, and Disc on 17 Aug 1963 at 4-2-3 for an average of #3. Coincidentally the BBC gave it a #3 for that week, so they must have assigned a value of #3 for RR as well. Otherwise, the BBC would have assigned 'Twist' a #21 for RR, which would have averaged out to a #7.5 .


                    • Originally posted by Splodj
                      I think that in March 1967, when MM decided the bottom part of its chart was being compromised, instead of cutting back from 50 to 30 it should have taken measures to combat or minimise that abuse instead.
                      I think they cut it back as the top 30 was seen as more accurate, and had less chance of containing records that were being bought up by record companies to make the lower regions of the chart, in the hope that the public would then buy the records. The "official chart" suffered from this problem for ages when it was a top 50; that is why TOTP only used the top 30 as a chart. You can almost certainly bet that any record that spent 4 weeks in the chart without getting past 41 was bought up by the record companies. They would get the record into the chart, then increase the effort on the second week to ensure it was still selling, which might have resulted in a climb from 49 to say 41. Then they let it go or put less effort into it. If it had taken off then it moved past 41; if it hadn't, then it fell to say 45. They might try to maintain the sales, but generally the record might fall out on week 3, or last another week.
                      This wasn't just done to new acts. One sales rep told me that they had pushed the Hollies - Long Cool Woman In A Black Dress in this way. Plus other well known acts. The vast majority of acts didn't know this was happening and would have been angry if they had known.
                      Education for anyone aged 12 to 16 has made a mess of the world!


                      • I don't know if anyone has calculated the average number of new entries each week, but seems there were about 4 for the 'Top 30' and about 6 for the 'Top 50'. (I suspect the precise numbers are 3 point something and 5 point something respectively for this period.) Looking at the net effect, this suggests that there were only about 2 records that entered the 50 but never reached the 30. It would be reasonable to expect some records to peak in the 31-50 section without any hanky panky, so I imagine the difficulty would have been distinguishing between these and those 'bought'.

                        As an aside, considering the dozens of new releases each week it is amazing how few became hits.
                        Last edited by Splodj; Tue September 10th, 2019, 10:22.


                        • Originally posted by RokinRobinOfLocksley
                          Looking at The Beatles 'Twist and Shout' EP, it peaked on NME, MM, and Disc on 17 Aug 1963 at 4-2-3 for an average of #3. Coincidentally the BBC gave it a #3 for that week, so they must have assigned a value of #3 for RR as well. Otherwise, the BBC would have assigned 'Twist' a #21 for RR, which would have averaged out to a #7.5 .
                          A minor correction. Per Dave Taylor/Trevor Ager's BBC chart file notes, the BBC was calculating a Top 30 chart as of Oct 1962, but did not mention or broadcast positions 21-30 until Aug 1967. Dave/Trevor actually compared their calcs against BBC Derek Chinnery's during that time and found a few discrepancies in positions 21-30. So, if the BBC had assigned 'Twist and Shout' a value of 31 for the missing RR chart, then the average BBC position would have been somewhere around #10. But as they gave it a #3, then they must have assigned it a #3 for RR. So that must have been the BBC EP rule.

                          There would also have been a rule for b-sides, as found on some charts, missing on others. No mention from Alan or Dave/Trevor about that either, but I'm sure it could be back-calculated to see what the BBC did.


                          • Yes you seem to have cracked the EP method.

                            With regard to B-sides, according to your example chart showing 'Peter Gunn / YEP' it appears that they allotted points according to the higher position of either side in the chart that split the record.

                            27th August 1967 is when POTP increased from 1 to 1.5 hours, which must explain why they needed to start referencing the Top 30. When Radio 1 & 2 started the following month it expanded again to 2 hours.


                            • A few more comments to some issues raised above:

                              Because of suspected chart hyping/rigging/cheating, MM and Disc went from a Top 50 down to a Top 30 in April 1967. NME stayed as a Top 30, RR stayed as a Top 50.

                              A reasonable person would think, at the very least, that the 'official' charts should have made MM 'official' from April 62 to April 67, as they too were a constant Top 50, and they sampled the most record shops, from 5 to 3 times as many as RR during that period.

                              But by choosing RR instead of MM to represent the 60s (by ‘official’ decree after 2001), the OCC is saying:

                              --it’s more important to have a Top 50 chart from 1960-69, from the LEAST representative/accurate chart

                              --than it is to have a Top 20 chart from 1960-62, to a Top 30 in 1962, to a Top 50 from 1962-67, to a Top 30 from 1967-69, from the MOST representative/accurate chart

                              I'm OK with using RR in some type of a combo chart, primarily for their additional chart positions 31 to 50 from 1960-62 and 1967-69, in spite of their low sample. But also to smooth out the uncommon 'stuff' amongst all the charts. Per Alan:

                              --continued charting separate b-sides later into the 60s than the other charts
                              --used advance orders for chart debuts
                              --included LPs into 1968 (though only a handful)
                              --were subject to some lower chart position ‘rigging’

                              --a good thing: the first (only?) chart to sample Northern Ireland in the 60s

                              --used advance orders for chart debuts
                              --charted LP’s into 1966 (though only a handful)

                              --forced tiebreakers
                              --almost no debuts at #1
                              --significantly the smallest record shop sample size
                              --no EPs (because they had their own separate EP charts)
                              --chart placements most out of step with the other charts
                              --was there a lag factor as well?
                              --erratic chart movements 1960-63, and July 1967
                              --chart compiled 1 day later than other charts (but was the sample period the same? Alan says yes, but also wonders about this…)

                              So, did RR lag the other charts consistently week to week, movements up and down? Probably worth doing a study:
                              --when did each record debut on each chart?
                              --when did each record peak on each chart?
                              --when did each record drop off each chart?

                              It'd also be worth comparing records that peaked #31-50: MM vs. RR 1962-66, and MM vs. Disc vs. RR 1966-67.

                              FYI, the 4 Guinness authors chose RR to represent the 60s because it had a constant Top 50 chart, as they were quoted as saying elsewhere, though not stated in the books. Except that Paul Gambaccini was not given a say in the matter. Guinness did not claim 50s and 60s 'official' status for NME and RR, rather they only stated which charts they were using 'for the purposes of this book'.

                              As for the 'official' charts, their argument since after 2001 has been that they chose RR as 'official' for the 60s because it was the 'industry' chart prior to Feb 1969. The most recent chart books refer to RR as being the 'lineage' prior to the start of the 'official' charts. Alan Smith says no, RR was not the industry chart, they were started as a music paper by and for independent record retailers, who were not aligned with nor funded by record companies. In one sense, my words, you could say RR was the anti-industry chart. Alan again: By 1966, RR was more established within the trade, and sections of the industry, but it never succeeded in becoming the full UK record industry chart.

                              RR began in Aug 1959; 7 months later they started producing a record chart. But there was nothing unique or special about this chart, it was put together exactly the same way as the other 4 major charts of the day, carried in music papers for music fans. No boys n girls, the least representative/accurate chart of the 60s should not have been given 'official' status.


                              • If Paul had no say in the decision I suppose it was made by Mike, Tim and Jo. As the 'number cruncher' I expect Jo would have lobbied for simplicity and against any sort of combo work in those pre-PC days. They would have been aware in general terms of the relative merits of the chart providers, and are still around so can speak for themselves, but my hunch is that if MM had continued publishing 50 they would have been chosen instead.

                                Looking down the MM number ones, I'd say the oddest absence (apart from the Beatles and Stones ones already mentioned) is 'Sunny Afternoon'.


                                • It could have been money and ease of locating the earlier charts. Two things spring to mind.

                                  1). Record Retailer had their own chart 1960-1969. They then carried the official chart. When putting the Guinness Book together (beginning around 1975 I think) Music Week would have been the easiest to get the charts from as the 1969-1975 charts would have been easy to locate and Record Retailer would have had archives of their own published issues. Gathering the NME charts for 1952-1959 may have proved difficult to achieve and would definitely have required a payment to NME - something that the RR charts may not have required.

                                   2) Money. Let's suppose we went with NME 1952-1960, then Melody Maker 1960-1969 (I forget the dates but in general let's say it's that for ease right now), then the Official charts 1969-1975. NME would already have been licensed as they had to be. Official would already have been licensed as it would have to be. So would licensing Melody Maker have been too expensive? Don't forget that, although Joel Whitburn books had been selling, there was no idea as to the market for such a book in the UK. Joel had only done a few books by 1975, and America had a much more stable chart (at least 1958-1975), so could that have been a decision?

                                   My thought: The reasons for the choice probably made sense at the time (1975). When the book sold, and sold well, they made another edition. It made no sense to go back and redo all the work of the earlier editions to replace the charts with something else, particularly in pre-computer days. I don't know how the book was stored or calculated, but I imagine it was far cheaper to simply add to it rather than start again, as would probably have been the case.

                                   Fast forward to 1995: over 20 years the book has become the definitive UK chart book, Radio 1 keeps saying their chart is official and the best, the Guinness Book charts that (at least from 1969 onwards), and so the charts have become official because they are accepted as such by everybody who bought the Guinness Book (and I don't mean us here - I mean the casual fan who just wanted to see how well Pink Floyd did and doesn't care about anything else).

                                   Now you're the OCC in 2001, deciding on what to actually make official canon. 1969 onwards is easy. It is based on sales and, whatever the flaws in particular weeks, it is at least trying to be the National Chart. But you really do want something else for 1960-69. NME is fine for 52-60. But we have the Guinness Book claiming that these are correct, and the book is the Bible of Pop (I found an old advert recently that claims just that about it, from about 2004), so do you re-write that and tell everybody the Guinness Book isn't? Now obviously at that point you could go away and pick Melody Maker for 1960-69. But by this stage Melody Maker hadn't been making their own chart for a very long time (stopped in 1988), and possibly licensing issues prevent some things appearing - NME charts were STILL not present in the Top 40 charts books even up to 2009 when the new Top 40 Charts book was printed.

                                   I can see the logic behind keeping what was, as that's easier and possibly cheaper. Plus, it is always possible that the accuracy of one chart over another had been forgotten - remember when the Record Mirror album charts were found in 2003 or so and everybody went wild because they were new and not known (and yet Record Mirror carried the official charts from 1969-1991). Things get forgotten. Knowledge vanishes as people die or forget. One of the good things about the internet is that boards like this can keep the knowledge alive long after those who knew have forgotten, died or left.


                                  • Good points. Also, in 1975 the compilers would not have seen the inexorable path from their decision to 'official' that we can in retrospect. Something else could have become the 'Bible' instead.

                                    Perhaps MM themselves could have produced a book including the previously unpublished 31-50s, although a difficulty here might be that they had publicly suggested these positions were unreliable.


                                    • Funny you should say the charts are like the bible, because like the bible they were compiled from different sources. Some of the sources were left out. Some of these are considered more accurate than the "official" version. And of course many people passionately believe in the official version and some like me don't believe in the official version!


                                      • I once asked Dave Taylor how the BBC calculated EP's into their chart as RR had no position for these in their singles chart.

                                        Dave explained the following. The BBC took the other three chart positions, for example from the Twist and Shout EP, that being 3, 4 and 2 for the other three papers which when totalled equalled 9 points. This figure was then divided by three (as three music papers were used) to get an average of 3 points, which was then multiplied by four (to get an average to cover the missing RR) which gave a total of twelve points. These points gave the BBC chart the chart position of 3 that you mentioned.
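Dave's description of the EP rule reduces to a two-step calculation, sketched below: average the positions from the papers that did chart the EP, then stand that average in for the missing Record Retailer figure so the points cover all four charts.

```python
# Sketch of the BBC EP rule as Dave Taylor described it above.

def bbc_ep_points(positions):
    """positions: the EP's placings on the papers that charted it
    (e.g. NME, MM, Disc). Returns (imputed RR position, total points
    across all 4 charts)."""
    imputed = sum(positions) / len(positions)   # average of the available papers
    total_points = imputed * 4                  # scaled up to cover the missing RR
    return imputed, total_points

# 'Twist and Shout' EP, 17 Aug 1963: NME 4, MM 2, Disc 3
imputed, total = bbc_ep_points([4, 2, 3])
print(imputed, total)   # 3.0 and 12.0 -> the BBC position of #3 mentioned above
```

Since the imputed RR value equals the average of the other three papers, the final average is unchanged by the missing chart, which is exactly why 'Twist and Shout' lands at #3 rather than the #7.5 it would get if RR were scored as a #21.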

                                        Dave also advised back then that for double sided singles, the BBC took the higher chart position of the two sides and ignored the other. In retrospect they would probably have been better using the above formula for these also.


                                        • Thanks for confirming that.

                                          On the B-side method, in the 'Peter Gunn / YEP' example I mentioned above it wouldn't have made any difference only scoring the higher NME entry. But it must have presented a bit of a problem at other times - for example, when 'The Next Time' was number 3 and 'Bachelor Boy' was number 5 (5-Jan-63).


                                           • Originally posted by Splodj
                                            Thanks for confirming that.

                                            On the B-side method, in the 'Peter Gunn / YEP' example I mentioned above it wouldn't have made any difference only scoring the higher NME entry. But it must have presented a bit of a problem at other times - for example, when 'The Next Time' was number 3 and 'Bachelor Boy' was number 5 (5-Jan-63).
                                            Yep I agree totally. That's why the BBC should have used the same method of averaging as they used for EP's which would have given a better average.

                                            On the subject of using 'hierarchy' for chart compilation, another system, simple but more representative, would be to use the method for all chart positions, 20 or 30, as the BBC used from 1964 for determining the number one position.

                                             In practice you then look at every chart position in its own right to see which record has the best stats to claim it.
                                             So, a number one with stats 1 1 1 7 for example would have a better claim than one with 1 2 2 2. For a number two, a record with 2 2 3 4 positions would have a better claim on the position than one with 1 3 3 3, as it holds the number 2 position on two charts whilst the other only qualifies for a top two place on one chart. A number seven holding positions 7 7 8 10, for example, would claim this position over a 7 7 9 9, as the number 8 position is closer to the number 7 than a 9, and so on.

                                            This system treats each chart with respect but also prevents any one chart where the position is way out of step with the others from unfairly influencing the justifiable chart placing, thus ensuring a much more accurate average. For example a record with positions of 7 8 9 15 would then be allocated an average between 7 and 9 and not be dragged lower by the number 15 position which is not in line with all the others.

                                            Where a hierarchy does give a tie like two records holding positions 2 2 3 3 and 3 3 2 2, then as suggested earlier i would simply use whichever music paper used the bigger sample at the time to break the tie.


                                            • It sounds like what you're advocating for, Mr. Tibbs, is determining a 'median' (middle) chart position from among the various charts, and a ranking based on that. Which would throw out the outliers. I did look into that, and it is a valid idea. It also produces a large number of ties, which could be kept as is or you could go to a well thought out tiebreaker. So, a valid option, and doable...
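One minimal reading of this 'median' approach can be sketched as follows: rank records by their middle chart position (which discards outliers like the 15 in a 7 8 9 15 spread), and break ties by the position on the largest-sample paper, as suggested earlier in the thread. The record names and positions here are hypothetical, and the sample sizes reuse the Aug 1968 figures quoted above.

```python
# Sketch of a median-position ranking with a sample-size tie-breaker.
from statistics import median

def median_rank_key(positions, sample_sizes):
    """Sort key: median position first; on a tie, the position on the
    biggest-sample chart decides."""
    biggest = max(range(len(positions)), key=lambda i: sample_sizes[i])
    return (median(positions), positions[biggest])

samples = [265, 175, 80]        # e.g. MM, NME, RR shop counts (Aug 1968)

records = {                     # HYPOTHETICAL positions on (MM, NME, RR)
    "Record A": [7, 8, 15],     # the 15 is an outlier; median = 8
    "Record B": [9, 9, 9],      # median = 9
}
ranked = sorted(records, key=lambda r: median_rank_key(records[r], samples))
print(ranked)   # Record A ranks above Record B: its median (8) is not dragged down by the 15
```

As the post notes, a plain median produces many ties; the tuple key above is one well-defined tiebreaker, but any "well thought out" rule could be slotted into the second element.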


                                              • That method makes a lot of sense to me. If I could wave a magic wand something like the result of that exercise, extended to a Top 50, would be the official chart up to 1969.

                                                You can argue that the BBC were right in not doing that final tie-breaking step for number ones but (putting aside any contractual or moral obligations of non-preference they may have had) I do think there is something odd about a chart representing the sales of records in their thousands (tens of thousands probably at number 1) having tied positions. I thought this at the time, when the charts described themselves simply as reflecting sales, and the knowledge now that it was down to a points system does not make it any better - but I do see merit in the other point of view.

                                                I see less merit in the argument that using RR pre-1969 can be justified on publishing continuity grounds - if claiming that extends beyond the cost and convenience advantage of the compilers. Most people associated the BMRB chart from 1969 with the BBC so I see their chart as the more rightful antecedent.
                                                Last edited by Splodj; Thu September 12th, 2019, 15:21.


                                                 • Good point, Splodj. The BBC chart could have been designated 'official' by the OCC after 2001 for pre-1969, as that too would have kept the continuity going into the BMRB chart. And as the BBC was an average of the other charts, it would have thus agreed a lot more with the other charts than did Record Retailer. Someone could have gone back and just calculated the lower missing BBC positions to bring it up to a Top 50. And that can still be done now.

                                                  I'll throw out a piece of info Alan Smith told me. After Alan compiled his chart history info in the early 2000's, having worked on it for years and published an article in Record Collector magazine, he took his findings to a former higher-up at the OCC, who was astonished by Alan's research. This OCC person and others there did not know about the 'imperfections' of RR at the time the decision was made to declare RR 'official' for the 60s, and he told Alan they would most likely have chosen a different chart for the 60s had they known about his data. So apparently the OCC did not do any research of their own to determine the 'best' chart for the 60s; rather, it appears they did the easiest thing (as KoS suggests above) and went with the Guinness choice.

                                                  But I should re-emphasize: prior to the OCC declaring RR 'official' for the 60s (after 2001), as far as I can tell Guinness never claimed their choice of RR for the 60s was 'correct', 'the best' or 'official'; they only stated which charts they were using 'for the purposes of this book.' Even though the implication may have been there, or readers 'osmosis-ed' into the idea.

                                                  While I'm here, lemme throw out this thought: did the BBC violate its own anti-commercial policy by endorsing and following the new BMRB charts in Feb 1969? NME, MM, and Top Pops/Music Now were still out there producing charts, and the BBC chose to go with BMRB, favoring one chart over the others. One could argue the new BMRB charts were better, but did the BBC nonetheless violate its own policy?


                                                  • I don't think so as MM and NME were invited to the party but declined.


                                                    • Originally posted by Splodj View Post
                                                      I don't think so as MM and NME were invited to the party but declined.
                                                      That's correct; they didn't want to contribute to the funding of the new chart. They thought their charts were good enough for the purposes they needed them for.

                                                      The BBC were not doing anything different from what they had been doing: making a chart from various sources. The BBC were presumably paying some kind of fee to use the various charts, simply on copyright grounds, I suspect. They were not like some chart fanatic who, I dare say, got hold of the various 60's charts and made up their own chart from them. Or the pirate radio stations, who would also do something similar. The BEEB had to do it the proper way!
                                                      Education for anyone aged 12 to 16 has made a mess of the world!


                                                      • Yeah, the BBC contributed to the cost of the BMRB chart, so it was technically their own product in a manner of speaking. Having said that, though, the BMRB chart itself was riddled with problems in its infancy. With actual physical sales being used for the first time, tied positions should have been nigh impossible, yet they occurred all the way through 1969. There were also many strange chart movements in that first year: Donald Peers's 'Please Don't Go' dropping from 3 to 10, then climbing back up to 4, then plummeting all the way to 20; and Max Romeo's 'Wet Dream' going 21, 36, 15, 30, 10, to name but two examples.

                                                        Apparently only a fraction of the diaries were coming back on time, and because the BMRB rotated to different shops from week to week, the returns were inconsistent, causing chart hiccups like those above.

                                                        I believe that, at least through 1969, the Melody Maker and NME charts had to be more accurate than BMRB's.


                                                        • As the pirates have been mentioned ...

                                                          The first offshore station to have a chart was Radio Atlanta, with a 'Hit Parade' on Saturdays 2 - 3 pm from June 1964. Compiled and presented by Australian DJ Tony Withers, who had presented chart shows in Sydney, it was effectively a prediction of what the conventional Top 20 would be the following week.

                                                          In December he became Tony Windsor and, as Head DJ on Radio London, was responsible for compiling their 'Fab 40' broadcast on Sunday afternoons before POTP went out. The first few are missing, but all the charts from February 1965 are on the web and there is a list of number ones on Wikipedia. It was a faster moving prediction chart.

                                                          At first he was determined to resist payola, whether offered directly or via other DJs. Upon seeing the draft of one chart, Kenny Everett asked him to simply listen again to 'Concrete and Clay', which then entered the chart at 31 on 14-Feb-65. The station's output was based on heavy rotation of the 40, so records on it were then more likely to enter the real charts. Eventually the station succumbed to payola and the chart became riddled with paid-for entries, including two obviously bought number ones.

                                                          The 'Fab 40 Show' was voted best radio show by MM readers, and its Top 10 appeared in Disc beside its own chart. In February 1967 compilation of it was taken over by Alan Keen, who later became General Manager at Radio Luxembourg, where the NME chart was replaced by their own in-house prediction Top 30.

                                                          A prediction or faster moving chart was not adopted by any IBA stations. In any case when research became more sophisticated a lot of radio chiefs were surprised to find that people wanted to listen to records for longer than they had previously assumed.

                                                          Anyway, sorry for the detour!


                                                          • I understand that although they claimed to use a lot of shops in the sample, BMRB actually used around 50 of the 250 for chart compilation. So, as you were saying, rotating the 50 amongst the 250 would lead to some shops not actually selling the records. One of the reasons for this could be distribution problems, due either to record company problems or to factors such as weather or industrial disputes. Shops also had to purchase the stock of any record, so they had to be convinced it would sell. Local-area sales could also throw a spanner in the works, such as an artist selling well in their own area, Tom Jones in Wales for instance. If you had five shops from Wales in the sample one week, then only two the next, that would see a fall for Tom Jones!
                                                            Apparently, in order to produce a "National Chart", many locally selling records could be excluded from the charts. For instance, very high sales of, say, a football team's record in one area would have been excluded on the grounds it wasn't a "national" record, even if it had enough sales to put it in the top ten!
                                                            The BMRB had to do some calculations to make the sample represent those stores they couldn't include, so one branch of HMV could represent 30 or more branches of the same chain.
                                                            I have always argued that this system was flawed, for one branch might have a sunny day with plenty of trade, whilst another branch had fewer than 10 customers all day due to, say, heavy snowfalls!
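To illustrate the grossing-up problem described in the two paragraphs above: if one sampled branch stands in for 30 similar stores, that single shop's week-to-week noise gets multiplied by 30 in the "national" estimate. The weight and the sales figures here are invented purely for illustration:

```python
# Rough sketch of BMRB-style grossing-up. WEIGHT and all sales figures
# are hypothetical -- the point is how one shop's swing is amplified.
WEIGHT = 30  # one sampled branch represents ~30 similar stores

def national_estimate(sampled_sales):
    """Gross up the sampled branches' sales to a national figure."""
    return sum(sales * WEIGHT for sales in sampled_sales)

# Same record, same sampled branch, two consecutive weeks:
# a busy sunny Saturday, then heavy snow keeping customers away.
week1 = national_estimate([40])  # 40 copies -> 1200 estimated nationally
week2 = national_estimate([8])   # 8 copies  ->  240 estimated nationally
print(week1, week2)              # a 5x national swing from one shop's weather
```

With a weight of 30, a perfectly ordinary local fluctuation of 32 copies becomes a swing of nearly a thousand in the national figure, which is exactly the kind of hiccup that could send a record bouncing around the chart.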
                                                            I think it would have been more honest for them to simply use all the shops that managed to get their returns in on time, and state on the published chart that it was based on, say, 190 shops out of 4,000 (depending on how many the UK had).


                                                            • I wonder how much of the analysis at the back of the Guinness book ('Biggest falls from number 1' etc.) is distorted by using only one chart.